Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Day 6 of #100daysofhomelab

Day 6 of #100daysofhomelab and I have some progress on my Kubernetes cluster!

Then, after a few minutes, it comes back online…

Also, I got WordPress installed in Kubernetes! Now to migrate this blog over… Hopefully before my next update tomorrow…

Day 5 of #100daysofhomelab

Day 5 of #100daysofhomelab and it's mostly reading… The daddy was in the hospital for the last 2 weeks, including over Christmas Day, so tomorrow is Christmas Day for us… turkey, ham and all the usual stuff… So, I've been busy with that. But I have been reading a couple of docs, so some links for today:

That’s about it for today… I’ll be back tomorrow… hopefully…

Day 4 of #100daysofhomelab

Day 4 of #100daysofhomelab and I am still reading the docs I posted yesterday on Kubernetes. I hope to get something sorted this weekend… On a different note, I posted a new YouTube video on the iODD ST400, linked below. This is a follow-up to the iODD Mini review I did a couple of years back. Hopefully, I will have a second video with some speed tests and a better walkthrough in the next few days.

Update: I think I am going to have to get my i7 box with its six 2.5Gb Ethernet ports, and one of the R720s, up and running soon… I am running out of memory on my Proxmox cluster.

Day 3 of #100daysofhomelab

Day 3 of #100daysofhomelab and more Kubernetes messing today. Haven’t got it working, but messing with it is a start. Some links and notes are below:

I am planning on moving my WordPress install over from my Docker host to Kubernetes in the next few days, so I am running through the docs from Bharathiraja above, but I keep getting errors related to MySQL… More digging is required. I use Cloudflare Tunnels to secure my WordPress install, so the docs on how to use Cloudflare Tunnels with Kubernetes are important…
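For anyone following along, the rough shape of what I am aiming for is a WordPress Deployment pointing at a MySQL Service. The sketch below is not my actual config, just a hedged example: the image tag, the mysql Service name and the mysql-pass Secret are all placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:6          # placeholder image tag
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql              # assumes a Service called mysql in the same namespace
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass      # placeholder Secret holding the database password
              key: password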

Day 2 of #100daysofhomelab

Day 2 of #100daysofhomelab and more messing with Kubernetes… So far, I have built, torn down, rebuilt and torn down a second time… and now I am building it for a third time! Techno Tim's Ansible scripts for the win! A couple of notes for today:

  • The script uses K3s version 1.24.8-k3s1. At some stage yesterday I tried changing this to 1.26.0-k3s1, the latest version from the K3s GitHub page… This was a bad idea. Rancher does not like it, and, well, I don't know what I am doing, so I want to see what Rancher does… (see the snippet after this list for where the version is pinned)
  • Ideally, you would have multiple master nodes, but, me being the lazy git that I am, I only set up one… It does look like that can be changed later on, though…
  • I have a total of 6 VMs running my K3s cluster: 3 have 4 cores and 8GB RAM each and run on GodBoxV2, which is now running Proxmox. The other 3 run one apiece on my HP MicroServer, the quad 2.5Gb Celeron box and an 8th Gen Intel NUC, and each gets 2 cores and 4GB RAM. That gives me a total of (roughly) 18 cores and 36GB RAM. Each VM has around 50GB of storage, and with Longhorn that shows up as around 250GB of space (the master does not seem to contribute storage). Replicas are set to 3, so each volume is stored three times and the usable space is a good bit less than the full 250GB.
  • Why Kubernetes? Well, I have 2 VMs currently running my fleet of Docker containers. I have lost count of how many I actually have. So, my plan is to use Kubernetes to move them all off those single Docker boxes and have them more distributed and more HA. This should make moving stuff around easier, or at least I think it will… At the very least, I get to play with new tech! 🙂
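For reference on the version note in the first bullet: in the k3s-ansible project that Techno Tim's scripts are based on, the K3s version is pinned in the inventory group vars, roughly like this. Treat the path and variable name as assumptions and check your own copy of the repo:

# inventory/my-cluster/group_vars/all.yml (roughly)
k3s_version: v1.24.8+k3s1   # the pinned version; bumping this to 1.26.0 is what Rancher did not like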

More work on the cluster is required. This blog is hosted in-house on one of the docker instances… Hopefully, at some stage, it will be moved to the K3s cluster! That would be the first major move!

Day 1 of #100daysofhomelab

I have decided to start my #100daysofhomelab journey again, so today is day 1. I have been working on a K3s cluster in the house and, so far, I have had to start over… I am going to rebuild it tomorrow at some stage…

Lots of Links

Some notes for myself:

Service Account for Dashboard

To create the service account, create a file called sa.yml and enter the following:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: <username>
  namespace: kube-system

Next, create a file called cluster-role-binding.yml with the following:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <username>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: <username>
  namespace: kube-system

Make sure the username matches in both files!

Run the following commands:

kubectl apply -f sa.yml
kubectl apply -f cluster-role-binding.yml
kubectl -n kube-system create token <username>
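The token printed by that last command is what gets pasted into the Dashboard's token login screen. If the Dashboard is not exposed externally, one way in is kubectl proxy; the URL below is the standard Dashboard proxy path and assumes it is deployed in the kubernetes-dashboard namespace with the default service name, so adjust for your install:

kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/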

Installing open-iscsi and NFS (required for Longhorn) with Ansible

Ansible Script

---
- hosts: k3s
  become: true
  
  tasks:
  - name: Update and upgrade apt packages
    become: true
    apt:
      upgrade: yes
      update_cache: yes
      cache_valid_time: 600 
  - name: install packages
    become: true
    apt: 
      pkg:
      - nfs-common
      - open-iscsi

  - name: Make sure open-iscsi is enabled and running
    ansible.builtin.systemd:
      enabled: true
      state: started
      name: open-iscsi
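To run it, the usual ansible-playbook invocation does the job. The file names here are only examples, assuming the playbook above is saved as longhorn-prereqs.yml and your inventory defines the k3s group:

# add -K if your SSH user needs a sudo password for become
ansible-playbook -i inventory/hosts.ini longhorn-prereqs.yml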

Bulk updating Tasmota Devices over MQTT #100daysofhomelab

I have a load of these smart plugs from Gosund around the house (currently around 11, but more are still in boxes). The handy part is that they can be re-flashed with Tasmota using Tuya Convert, and with the right template you get power usage readings and an on/off switch. I have mine hooked up to an MQTT server, and with the MQTT integration in Home Assistant I get all the details about power usage and can control each device as needed (hence the 11 of them!).

But MQTT can be used for more than monitoring: I can send commands to the devices. Given that all of them are on a locked-down network and only have access to the NTP server, internal DNS and the MQTT box, I needed to figure out how to get OTA updates to them. Luckily, you can change the OTA update URL in the web interface and point it at a local endpoint… But I am a lazy git, so I needed to figure out an automated way. Enter MQTT again.

First, you will need to log in to your Tasmota device, go to Configuration, then Configure MQTT, and enter your MQTT host. Also, note your topic name while you are there and keep it handy.

Hit Save and wait a few seconds for it to restart. Next, you need to watch the MQTT messages going through; I am using MQTT Explorer to see all messages.

For me, the tele topic has all my devices listed, plus stats around power, status, etc.

On the Tasmota site, they have documentation on sending commands over MQTT. I then installed the MQTT CLI on my Mac (so I could automate this later) and ran the following commands:

mqtt pub -t cmnd/tasmota_<deviceid>/OtaUrl -m "<internal url hosting tasmota updates>/tasmota.bin" -h <mqtt host> -p <mqtt port>

mqtt pub -t cmnd/tasmota_<deviceid>/Upgrade -m "1" -h <mqtt host> -p <mqtt port>

Update <deviceid>, URLs, host and port as required. For the internal URL, I just have a small copy of Nginx running in Docker, serving a folder with copies of the latest OTA files from the Tasmota releases page. I grabbed all the files and put them in the folder shared by Nginx. I need to automate that a bit better… maybe next time there is a new release?
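For what it's worth, that Nginx container is nothing fancy. A rough sketch of the idea (the host path, port and container name are made up for the example; the stock nginx image serves whatever is in /usr/share/nginx/html):

# serve the downloaded Tasmota OTA files over plain HTTP
docker run -d --name tasmota-ota -p 8080:80 \
  -v /opt/tasmota-ota:/usr/share/nginx/html:ro \
  nginx:alpine
# OtaUrl then becomes http://<docker host ip>:8080/tasmota.bin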

The first command tells the device where the latest OTA file is; the second kicks off the update. If your devices are not segregated from the internet, you can just leave the existing OtaUrl in place and kick off the Upgrade command on its own… I wasn't that brave…
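To hit the whole fleet rather than one plug at a time, a simple loop over the device IDs works. The IDs below are made up; it just reuses the same mqtt pub commands as above:

# hypothetical device IDs; grab the real ones from the tele/ topics in MQTT Explorer
for id in ABC123 DEF456 GHI789; do
  mqtt pub -t "cmnd/tasmota_${id}/OtaUrl" -m "<internal url hosting tasmota updates>/tasmota.bin" -h <mqtt host> -p <mqtt port>
  mqtt pub -t "cmnd/tasmota_${id}/Upgrade" -m "1" -h <mqtt host> -p <mqtt port>
done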

Within a couple of seconds, you will start to see messages showing up in MQTT Explorer. After a couple of minutes, all devices will have been upgraded and rebooted (no power down, luckily) and all is good!

DNSControl and GitHub Actions #100daysofhomelab

I am participating in the #100daysofhomelab challenge and have been posting a lot on Twitter as @tiernano, but some of the posts and tasks I am doing require longer-form write-ups. So, some updates will include either videos (which will be published on my YouTube channel) or blog posts, which will go here. This is the first of the blog posts.

DNSControl is a tool written by the Stack Overflow lads (back when they called themselves Stack Exchange). It is designed to manage DNS records and can work with both DNS providers and registrars. I use it to update records in Cloudflare and Route53, but many providers are available. I wrote an article a while back about how to create reverse DNS records for IP space with Route53 and DNSControl; most of it is still relevant, and the main DNSControl documentation site has a lot of useful tips.

Up until this morning, if I wanted to update a record, I checked out the DNS config from my private GitHub repo, made the change, and ran the DNSControl commands on my machine (check to syntax-check the file, preview to show what will change at the provider level, and push to make the changes). But I wanted some automation for this. So, enter GitHub Actions.
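For anyone who has not used DNSControl, those three commands are just the standard CLI subcommands, run from the directory holding dnsconfig.js and creds.json:

dnscontrol check    # syntax-check dnsconfig.js locally, no provider calls
dnscontrol preview  # show what would change at each provider
dnscontrol push     # actually apply the changes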

I did a bit of digging and found a GitHub Action from koenrh called dnscontrol-action. The docs on this are quite simple to go through, so I created 2 workflow files for my repo: preview and push. A Gist for preview is below:

and the one for push is as follows:

The important parts are as follows:

In both preview and push, the check command does a syntax check of your DNS config file. Then preview will check the providers to see if any records need an update. When push runs, it will make the changes.
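To give a flavour of what those workflow files look like, here is a rough sketch of a preview workflow using koenrh/dnscontrol-action. The action version, trigger and secret names are assumptions on my part, so check the action's README rather than copying this verbatim:

name: preview
on: pull_request
jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: DNSControl preview
        uses: koenrh/dnscontrol-action@v3   # pin to whatever version the README recommends
        with:
          args: preview
        env:
          # provider credentials come from repo secrets (names here are examples)
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}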

All the required keys are stored in the GitHub repo as secrets, so when the action runs, it pulls them out and puts them into environment variables. I use name.com as a registrar for some domains (though most have now moved to Cloudflare, and some, like my .ie domains, are with Blacknight, who are not supported by DNSControl). Cloudflare handles the majority of my domains, and Route53 is used for 2 domains currently. There are around 53 domains currently managed this way, and the plan is to add more. I also plan on getting some more automation around checking configs and sending alerts if anything changes.

So, enough "how it works"; let's see it working!

Right. Let's update my zt.tiernanotoole.net domain, which is used for ZeroTier IPs internal to my network. It's been a while since I did this, so most of the records will be removed and a few added… First, I created a new branch called zt-update and checked it out in VS Code. I made my changes, committed them, and pushed the branch.

At this stage, the actions have NOT run, since the change is neither checked in to master nor part of a PR against master.

I go into the create PR section, and I can see the changes I have made. In my case, I removed a load of unused records and added a few extras:

I now create my PR and wait for the checks to complete:

Within a short time, I get an alert that all checks have passed, and I can find the results of the changes in the build output (it was meant to add a comment to the PR with the details, but I might be missing something in my config…).

Also, not sure why it is redacting part of my name here…

I check the rest of the list, and other than the deletes and creates in Route53 for this domain, there are no other changes. Happy with that, I click Merge Pull Request, the code is merged into master, and the DNSControl push command runs:

If I now go into Route53, I can see the records there:

Happy days! Next challenges to fix:

  • fix the PR to include the output of check and preview
  • only run a check and push on the master branch, and no need to run preview again…
  • run preview once a week and send alerts if anything has changed (a scheduled-trigger sketch is below)
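GitHub Actions can run workflows on a schedule, so the weekly preview could just be a cron trigger on that workflow, roughly like this (the day and time are arbitrary, and cron here is in UTC):

on:
  schedule:
    - cron: '0 8 * * 1'   # every Monday at 08:00 UTC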

Till next time, good luck!

UniFi Network Update 7.1.61

A few weeks back, Ubiquiti released a pre-release update for the UniFi Network controller, version 7.1.61. It got installed on my UDM, and I noticed a few interesting bits that you might find handy… First things first: you will need to be signed up for UniFi Early Access before you can download it or even read the release notes. This is just a quick update based on my findings so far.

The first thing to note: You can see the list of devices connected to switches on the Overview Tab. I can’t remember exactly when that was added, but I think it’s new…

Under the Ports tab, you now have a port insights option:

Clicking this gives you:

You can also select multiple ports and make changes at a bulk level:

You can also see a bit more info about each port:

Teleport VPN has also been added. This makes giving someone access to your network a LOT easier than usual. They will need the WiFiman app on Android, iOS or Mac to join. Not sure what happens on a Windows machine… Maybe it's coming soon? To use it, just generate a new link and send it to your user. I am not sure how to remove them afterwards (if you only want to give them temporary access, for example…).

The final interesting part, and something I have been waiting on for a while: under Traffic Management, you can now create custom traffic rules:

You can set the rule based on destination domain name, IP address, or even the full internet:

And you can set the source to be all devices, a group of devices (a network), or individual (or multiple targeted) devices.

Finally, you can set the output internet connection.

If you have multiple internet connections and one has better speeds for stuff like Netflix, or you want to send bulk data over a different link, you can do that with this feature. Very cool stuff.

So, still testing, but looking good so far.