Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Back running WordPress

I have moved my blog back over to WordPress. It is running in-house, on one of my workstations, using Cloudflare’s Argo Tunnel to protect it on the internet. You might be asking “why?!” Well, it comes down to a couple of things.

  • Easier to blog and post from anywhere in the world.
  • I can blog on pretty much anything.
  • No more worrying about an upgrade to my copy of Hugo breaking my site…

That last one is the reason I haven’t blogged in a while. Seems there was a major change in Hugo somewhere between the release I was on (0.55.6) and the latest one I tried (0.73.0 or something… 0.76.3 is out now): my index.html pages just would not generate, and I got many warnings when building… I spent a few hours trying to figure it out, but in the end, I gave up.

I ended up using Chris Salzman’s blog post explaining how he moved from Hugo to WordPress, spent an hour or so tweaking the imported files, built a Docker Compose file (I will post this somewhere soon, if anyone wants it) and was off to the races. A few tweaks later, a copy of cloudflared and some DNS tweaks, and everything was back online.
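For anyone curious before I post the full thing, the shape of it is roughly this; a minimal sketch, where the image tags, passwords and the tunnel token are placeholders (and the cloudflared service assumes a named-tunnel token, with the tunnel’s ingress pointed at the wordpress container on the Cloudflare side):

version: "3"
services:
  db:
    image: mariadb:10
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql
  wordpress:
    image: wordpress:latest
    restart: always
    depends_on:
      - db
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: always
    depends_on:
      - wordpress
    # the tunnel's ingress rule (set on the Cloudflare side) points at http://wordpress:80
    command: tunnel --no-autoupdate run --token YOUR_TUNNEL_TOKEN
volumes:
  db_data:
  wp_data: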

There are some disadvantages to WordPress:

  • Comment Spam
  • Performance
  • Maintenance
  • Security

But even so, I am willing to put up with these to be able to blog more easily.

Fixing CID (Caller ID) on incoming calls with 3CX

In a previous post I talked about going all-in on VoIP in the house. It’s been nearly a year now, and other than some minor issues related to the VoIP server being turned off accidentally, or a screw-up on my end, all is going well. But one thing I did notice was related to incoming calls and Caller ID, specifically on my SIP2SIM card. Essentially, the country code was wrong. For example, incoming calls from the Virgin Media trunk just showed as local numbers (for Dublin, it would show 01xxxxxxx). Using the CID reformatting feature in 3CX, I managed to change this.

All calls that come in starting with 0 are “fixed” and changed to +353 without the 0. When a call comes in through the SIP2SIM card, it no longer shows as a call from the UK, but now shows as a call from Ireland, or wherever it is actually coming from, so all the contact details show correctly! Happy days!
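The rule itself is just a prefix substitution; written as a plain regex for illustration (this is not 3CX’s exact rule syntax, and the number below is a dummy), it is essentially:

# strip the leading 0 and prepend the Irish country code
echo "015551234" | sed -E 's/^0/+353/'
# => +35315551234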

Network Update Info April 2019

So, this post has been a long time coming! A load of different things to talk about, so let’s get started!

GodBox V3

So, for a long time, I have been thinking about GodBoxV3, the replacement to GodBoxV2. And when planning this, I had some ideas of what it should be:

  • Minimum of 2×16 cores (double GodBoxV2)
  • About the same RAM, if not more
  • FAST STORAGE!
  • Is able to run my twin 30" 4K monitors
  • Would like 10Gb/s NICs

Well, it finally happened! I got the machine, built it and, well, it’s impressive! How did I do with specs? Well…

All is good! Photos, more details and benchmarks coming soon… stay tuned!

Finally 10Gb/s Networking!

Since GodBoxV3 has a few 10Gb NICs, I needed to upgrade the network to support it. I ended up with a Ubiquiti Networks EdgeSwitch-XG: 16 ports (12 SFP+ and 4 RJ45). The SuperMicro board has 2 RJ45 10Gb ports. Due to the lack of RJ45 ports on the switch, GodBoxV3 is connected to one; GodBoxV2 is getting a 10Gb card soon, which will be connected to another; and a new Sun Microsystems server (details below) will be getting the last two… Of the SFP+ ports, 2 are connected to the EdgeSwitch Lite, 2 to the Synology (it got a 10Gig NIC recently too!) and 2 to the new NAS (again, more details below!)

Goodbye Mikrotik, Hello EdgeRouter 4

Since I was going all-in on Ubiquiti gear (WiFi is UniFi gear), I got rid of the old Mikrotik and replaced it with a Ubiquiti ER4. Happy days! Got some plans for this, more details coming soon…

Updates to BGP Stuff, including IPv6

I lost one VPS in London, but replaced it with a new one from HostUS. I still use Vultr, Packet and VServer.Site as providers too. I am also adding more and more IPv6 stuff… There is a post on AS204994 explaining a lot of this.

New NAS and more storage!

New NAS got purchased: a QNAP TS-932X. I have 5×8TB spinny disks (shucked from WD My Book 8TBs) plus 4×500GB WD Blue SSDs.

New Servers and cooling updates

Moved lots of stuff around the room… Servers run cooler and less noisy! Happy days! I also got my hands on a very nice looking Sun Server X3-2. It’s a dual Xeon E5 (currently quad cores, going to upgrade it to 8 cores) and I think it’s got 16GB RAM and 4×300GB SAS disks. It also has 4×10Gb NICs! ESXi will probably go on here!

VMWare in the house

Up till recently, I ran Hyper-V all round. It’s still on GodBoxV2 and V3 (V1 has an HDD issue, so it’s off…), but the main VM hosts (the C6100s) are being migrated to VMware ESXi… Why? It’s a learning exercise… We’ll see how it goes…

So, long update… Any questions, comments, etc… shout!

Adding a Netgear LB2120 to the homelab

A few months back, Three Ireland came out with an LTE broadband offer: unlimited* LTE broadband for EUR30 per month. It did come with an 18-month contract, but I pulled the trigger and got it as a backup link. I picked this up in the local Three store, and they had a couple of options for modems: a couple of Huawei mobile WiFi hotspots (E5573 or E5577) or a Huawei B525 modem, which is designed for home use. Alternatively, there was a SIM-only option, but given the modem was free with the contract, I went with the B525.

The B525 is not a bad router, don’t get me wrong, but it’s a router… I already have a few of them, including my Mikrotik RouterBoard CCR1016-12G, an EdgeRouter POE, a Ubiquiti USG 3 and some virtual ones too… yea, don’t ask… What I wanted was a modem: no WiFi, no routing, just give me a full, non-NATted IP to the internet. There was some mention of some of the Huawei modems being able to be put into bridge mode, but I could not find out how to do it… That’s where the Netgear LB2120 comes in.

The LB2120 (there are a few different models, but mine is the Europe edition) has a Micro SIM slot, a WAN and a LAN port (both GigE), power in, a power button and 2 inputs for aerials.

The home page is fairly basic, and gives you all you need: how much data you’ve used, how much you have left, when the data plan resets, etc.

There is also an alerts option, so you can get it to send you an SMS when something happens.

But the really handy stuff is under Advanced settings / LAN.

You have the option of using it as a router, or using it as a bridge. Needless to say, I bridged it and got a fully public IP. Mine is hooked up to the second WAN port of the USG, and is currently serving about 30-40% of the traffic on that network (mostly media devices, IoT stuff, etc.). Speed-wise, it’s not bad.

Not as good as a hardwired connection, but it’s only getting 4 bars, and it’s about 1km to the nearest cell tower. I do want to get some external aerials for it, to see if I can boost the download/upload speed, but we will see. Also, I plan on changing out the Mikrotik for something else… let’s see what I end up with!

*Unlimited is mobile network speak for 750GB per month… which does not sound very unlimited to me… but, anyway…

Finally going all in on VoIP

After many years, I am finally trying to move to a proper VoIP system for the house. This post will explain what I am using, how I am setting it up, and some other details you might (or might not) find useful.

First, backstory. I have been interested in VoIP for many years. The first post I wrote about it on this site was back in 2012, but I had posted about it on my other site back in 2008. It got my attention years ago as a way of saving money on calls, but in recent times, that has changed a little, mainly because most providers give you calls for free (my mobile and landlines both come with unlimited calls, and with my mobile, I can make them anywhere in Europe). The new reason I am interested in VoIP is consolidation: I currently have 3 mobile phone numbers, at least 1 landline dedicated to me in the house, plus a work landline. I want to be able to pick up any phone and make a call, and have it show as coming from my main number. Or a call comes in and I can pick it up from any of my phones… And that is what I am trying to do here… I (will) have some of it working, but some parts are still missing…

The parts I have (or will have) working are as follows:

  • My landline number in the house is being ported to Virgin Media’s VoIP service. So, that’s not stuck in an analog world any more!
  • The house phone now has a VoIP adapter, allowing the standard analog phone to make VoIP calls.
  • There is a company in the Netherlands called ZeroPlex who have a VoIP over GSM service. Essentially, the SIM they give you is connected to your own SIP trunk. You can set it up to allow all calls to go through your SIP trunk, only incoming, or only outgoing. I found their contact through Reddit, but they may be able to help if you drop them an email.
  • All VoIP traffic in the house is routed through 3CX.
  • I have a couple of SIP trunks hooked up to 3CX: Virgin Media; ZeroPlex (incoming calls to the NL number are sent over this, and I can make calls through this trunk too); Twilio, which I use for transient numbers; and Sip Discount, which offers really cheap calls.
  • Phone-wise, I use a Ubiquiti UVP-Executive desk phone, the SIM card, and the 3CX client on mobile (either iPhone or Android).

So, all in, I’m about 50% of the way there… As of the time of this post, the SIM is still in the mail and the phone numbers are not ported to Virgin Media… yet… Tomorrow they should be, and over the next few days there will be some tweaking to get it working correctly… I will probably have some updates over the coming week…

Auto deploying to multiple servers with GitHub and Webhooks

In yesterday’s post, I mentioned that I wanted to try to get auto-deploy working for this site. It already builds auto-magically using Forestry and puts the static HTML into a GitHub repo, but I needed to manually update the servers hosting the site… Well, not any more!

Using the magic of GitHub’s webhooks, the webhook project and a small bash script, I have managed to get this auto-deploying…

First, download the webhook project (it’s a Go application, so it works pretty much anywhere) and copy it somewhere on your machine. Next, you need a config. I used the GitHub sample config from the project site and made tweaks to what script to run and what I was passing in.
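Mine ended up looking roughly like this; the hook id, script path and secret below are placeholders, and the trigger rule follows the project’s GitHub sample (it checks the payload signature header and the branch ref):

[
  {
    "id": "github",
    "execute-command": "/opt/hooks/redeploy-site.sh",
    "command-working-directory": "/var/www/localfolder",
    "trigger-rule": {
      "and": [
        {
          "match": {
            "type": "payload-hash-sha1",
            "secret": "mysecret",
            "parameter": { "source": "header", "name": "X-Hub-Signature" }
          }
        },
        {
          "match": {
            "type": "value",
            "value": "refs/heads/master",
            "parameter": { "source": "payload", "name": "ref" }
          }
        }
      ]
    }
  }
]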

Next, the script to pull from GitHub was simple enough.
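A minimal version looks something like this (assuming the checkout lives in /var/www/localfolder, as mentioned below):

#!/bin/bash
# pull the latest built HTML into the web root
cd /var/www/localfolder || exit 1
git pull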

The repo should already be cloned into the folder, /var/www/localfolder, and your web server should be pointing at it too. Then, it’s just a matter of running the command:

./webhook --hooks github.json --verbose

The --verbose flag gives you lots of info, so it’s handy for testing. Your app is then running and listening on the default port, 9000.

Next, head over to your project on GitHub and go to Settings.

Select Webhooks and add a new webhook.

Fill in the required details on the page, and click save.

GitHub will go out and have a chat with the webhook and verify it can send and receive stuff from the hook. You can see this in the Deliveries section:

Clicking on these will show you the headers that were sent, along with the payload, and you can also see the response from your server. Finally, you have the option of re-sending the payload, just in case anything goes wrong.

So, there you have it. A complete automated deploy across multiple servers! Any questions, leave a comment below!

[UPDATE] Yesterday I mentioned I had to modify the sample that was included on the webhook site. Well, I noticed something this morning. The reason I needed it modified was that the trigger rule was checking the header and the reference for the branch, but any time I ran it, it would not trigger… The reason was simple: the webhook app is expecting application/json, but I had the GitHub webhook set to application/x-www-form-urlencoded, which is the default… the webhook app then couldn’t parse it correctly… Changing that fixed the problem! Happy days!

Moving the site to Hugo

After a LOT of messing with Jekyll, I have finally moved to Hugo! There are a few things that don’t fully work yet, and there will be updates to the site soon enough, but for the moment, I am happy… It’s also a LOT faster to build than Jekyll, with fewer dependencies… Happy days!

[UPDATE] I thought I should probably update this post with a bit more information on how it’s built, why I moved to Hugo, some more links, etc.

First, it is currently being built using Forestry.io. I use it both for editing the documents (mind you, I also use VSCode for this) and for building the site now too. I have 2 GitHub repos: the main code and the static HTML. When I check into GitHub, it sends a webhook to Forestry, which then pulls the latest code, builds the site and checks the resulting files into the static HTML repo. Currently it’s a manual process to get it onto my server. This site is hosted on a server in London, with my own AS204994 serving the pages. I plan on adding other servers to the list so it’s proper anycast, but only 1 is running as a web server currently (there are 4 others, if you include the one in the house: 3 hosted on Vultr (LON, FRA, NYC), 1 in DevCapsule and then the house), but that is my next challenge…

Next question is why? Well, over the last few months, it was taking longer and longer to build the site using Jekyll… It was also getting more painful to maintain, since you have to mess with dependency hell when updates come out…

Then when it finally builds, it takes more than 4 minutes in most cases…

Now, in all fairness, this was building on a clean machine with no bundle caching, and I did have bundle caching at some stage, but it still takes long enough (30-40 seconds in some cases) to build compared to Hugo.

The other big advantage is that Hugo arrives as a single EXE (or other binary format for other platforms) and runs on Windows without any extra stuff to install… Drop the EXE in your path, cd into the folder, run hugo serve and you have a web server running with your files… If you want to deploy them, run hugo and it builds the project and sticks it in your public folder. Do what you want with it from there! Happy days!
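To make that concrete, the day-to-day loop is just two commands, run from the site folder:

hugo serve     # local web server with live reload (http://localhost:1313 by default)
hugo           # builds the whole site into the public folder, ready to deploy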

So, finally, some links I have found useful while building this site.

  • The Hugo Documentation site should be your first port of call… Lots of handy stuff in there…
  • Adding Search with Algolia: since the site is static, search doesn’t currently work… but with the help of a hosted service named Algolia, you can get around that easily enough…
  • Turn your static site into a JSON API: I am thinking of tweaking the Computers pages (they stopped working as planned when I moved over) and using a JSON API for it… Same with the tools list…
  • Shortcodes: Hugo doesn’t really have a plugin model like Jekyll did… but there is still a lot of interesting stuff you can do… Shortcodes let you write custom HTML that gets generated when you put in a particular block of code… Have a look at the link at the bottom of the page to see the code used to generate this page.
  • Cloudinary: since moving to static sites, I have found images to be a pain in the ass… Found these guys the other day, and they integrate well with Forestry, and their free version works grand for smaller sites…

So, any comments, questions, etc., just leave a message below. And don’t forget to subscribe to the RSS feed for updates as they come out!

[UPDATE 2] So, with the help of GitHub webhooks and the webhook project, this site auto-deploys to 5 different servers and is currently being served from 4… Dub is not fully live yet… happy days!

[UPDATE 3] As mentioned above, I have the GitHub webhooks working.

Playing with AMD's Epyc

So, a few days back I got an email from Packet.net about a promotion they and AMD were running. Essentially, they gave me some credit for their service (I am an existing customer) to play with one of their c2.medium machines. A c2.medium comes with an AMD EPYC 7401P, which consists of 24 physical cores clocked at 2GHz with an all-core boost of 2.8GHz and a max clock of 3GHz, 48 threads, 64GB ECC memory, 2x120GB SSDs for boot and 2x480GB SSDs for main storage. It also has a 20Gb network link (2x10Gb bonded) and can run pretty much any OS you can think of (Windows is not on the list officially, but you can boot off your own ISO, so you could probably get it on there… might not be supported, but it might be possible). All this for $1 per hour! And did I mention they are bare metal machines?

This was the perfect opportunity to play with the new AMD processors. My current and previous generation workstations (GodBoxV1 and GodBoxV2) are both running Intel Xeon processors. The machine previous to them, the original 1,1 Mac Pro, is also running a Xeon processor. But before both of those, my first 2 major workstations ran AMD… The first ran 2 AMD Athlon MP processors. These were old-school single-core processors; I can’t even remember their speeds, but I do know they were 32-bit only and the machine maxed out at about 1.25GB RAM (I think, technically, it could support 2GB, but some limitation in the BIOS capped it at 1.25GB). The second AMD workstation ran 2 AMD Opterons… again, single-core, but this time they ran 64-bit and IIRC maxed out at 8GB RAM. This was a limitation of the board, not the processor…

I have been thinking about GodBoxV.next, and the AMD processors, specifically the Threadrippers and Epycs, are contenders for the next machine… so this test allows me to check them out before I buy! Why would I say no?!

So, I spun a box up in New Jersey running Ubuntu 17.10 to play with it, and here are my findings…

First, I ran lscpu on the box to see what I was playing with.

I then ran fdisk -l to see what disks I had to play with. On my machine, sda and sdb were the 480GB SSDs, sdc was an empty 120GB drive and sdd was the boot drive… I installed btrfs-progs and then formatted sda and sdb as a RAID0 array, which I then mounted at /mnt. This gave me just under 900GB to play with…
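For reference, the whole RAID0 setup was only a couple of commands; something like this (device names as they were on my box, so check fdisk -l first):

apt install btrfs-progs
# stripe both data and metadata across the two 480GB SSDs
mkfs.btrfs -f -d raid0 -m raid0 /dev/sda /dev/sdb
# mounting either member device mounts the whole array
mount /dev/sda /mnt
df -h /mnt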

So, my first test is the usual test: building the Linux kernel. I know this is something the lads at ServeTheHome do a lot, but it’s something I wanted to try myself… First I installed git and build-essential, then bison, flex and ncurses-dev, and then I cloned Linus’ git repo at git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git. First things first: this machine has a twin 10Gb link, a shedload of cores and some very fast storage. How long did it take to clone? It downloaded 1.02 GiB at 35.32MiB/s (about 30 seconds, at about 280Mbit/s) and, all in, took 2 min 55 seconds to clone. I then ran time make -j 49 to see how long it would take… hmmm… no config file… make menuconfig and just hit save… defaults are grand… time make -j 49 again… and more errors… After a bit of googling, I found the page from Ubuntu showing what I needed to do to build the kernel. I followed that… downloaded a LOT more stuff using their instructions, and finally, we got to build… Time: 6 min 12 seconds… and this is a FULL default build of the kernel…
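Boiled down, the whole dance was roughly this (the ncurses package name varies by release, and the Ubuntu page pulls in a good few more build dependencies than I list here):

apt install git build-essential bison flex libncurses5-dev
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
make defconfig      # or run make menuconfig and just save the defaults
time make -j 49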

The same build on a VM on GodBoxV2 (which was given 32GB RAM and 16 threads, so a full Xeon E5-4620) took 8 min 27 s to clone (8.18MiB/s, or about 64Mbit/s) and 36 min to build… yea, that is a third of the cores, half the memory, slower storage (this is on spinny disk, not SSD), slower network, and it is also a VM vs bare metal; still, to be essentially 6 times slower? Interesting… I might, at some stage, boot the machine off a live Linux USB and run some more tests, but not tonight…

So, all this was because I was holding out for the main event… photo processing… I wanted to do something “real life”, which for me would be development and photo processing… The kernel build gives an idea of a large project being built; the image processing gives an idea of multimedia work…

So, I devised a test: export a bunch of photos (a mix of photos taken on my 5Ds, 5D MKII, iPhone 6 Plus and iPhone 7 Plus) that are stored in Lightroom at full size and run them through a basic .NET Core app I wrote. The code for the app is available here. The app fully utilises the machine by using multiple threads, and because it’s 64-bit, it will use as much memory as it can get its hands on. It just does some basic processing: open the file, resize to 1024x1024 and then save it… the 1024x1024 part is just a test… I was a bit under the gun on time, so I couldn’t spend as much time working on it as I wanted to…

In total, there were 1,546 photos exported, and the total file size was 15GB. The first obstacle was getting them uploaded to the Packet machine, which took a while (my upload speed is currently 40Mbit/s)… Once up, I downloaded a copy of the .NET Core 2.0 SDK, cloned the repo with the project, built it and ran it… and man, it’s fast! 4 min 43 seconds. And it used all the cores.

Running the same code on GodBoxV2 on bare metal (no VM this time), I got a run of 17 min 35 seconds… Now, GodBoxV2 has other things running in the background, but not that much… I also noticed that, on average, photos were being processed in 3-5 seconds on the Epyc, but nearly 13-15 seconds, and sometimes 20-25 seconds, on GodBoxV2. I also noticed that on the Epyc, the dotnet process took nearly 45GB of RAM to run… On GodBoxV2, it took over 70!

So, there you have it. Some starting tests with these processors. I am well impressed, and would have no issue getting one for the next GodBox… And with names like Epyc and Threadripper, why not?!

AS204994, Own IP Space and Anycast

So, if you are reading this page, it is being delivered with the magic of anycast… Well, technically, it was before, since I used Cloudflare, and it still is because of Cloudflare, but also because of my own ASN (AS204994), some servers in different locations, and some magic, which I will explain a bit of in this post.

This all started late last year when I got my hands on an ASN and a /48 block of IPv6 addresses. I had been reading stuff about BGP, routing, etc., and decided to go all in. It was quite cheap with the help of HostUS: all in, about $50 for the year. As part of the process, I needed 2 upstream providers to say they would accept my announcement. They were Hurricane Electric, through their Tunnel Broker service, and Vultr, using a few of their VPSs.

After I got my space and ASN, I started to announce the v6 addresses over Vultr and Hurricane Electric, and all was good. I had 2 Vultr servers: 1 in London, UK, and 1 in New Jersey, USA. I had my home machine announce to HE, and then also link to both Vultr servers using ZeroTier. All worked well, but due to some family issues, I never got around to putting it into production… till now.

Those 3 servers now share an IPv6 address on the loopback interface. When you (well, Cloudflare) ask for that IP, the closest (network-wise) server with that IP responds, and the NGinx server on that box sends back the contents of the site. This site is hosted on each box, since it’s fully static, but both AS204994 and TiernanOToole.net are hosted in Ghost, so Dublin (my machine in the house) serves them, and both Lon1 and Nyc1 do proxying. So, most requests from the US hit the box in NYC, and the ones in Europe get either Dub1 or Lon1. I have some tweaks to do with which servers will be running where, and may add more, but currently it’s working well.
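The per-server part of that is tiny: each box binds the same service address to its loopback interface, and BGP takes care of getting traffic to the nearest one. Something like this (using a documentation prefix here as a placeholder, not my real /48):

# add the shared anycast address to loopback on every node
ip -6 addr add 2001:db8:1234::1/128 dev lo
# each node then announces the covering /48 to its upstreams as normal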

So, how do you figure out which server responded? Simple. Open the dev tools in your browser, go to the Network tab, refresh, and look at the response headers for anything on this domain. You should see something like below.
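You can do the same from a terminal with curl; the header name below is just an example of the kind of thing you would set per-node in NGinx with add_header, not necessarily what this site uses, and the domain is a placeholder:

# -I fetches headers only; look for the custom header naming the node
curl -sI "https://www.example.com/" | grep -i 'x-served-by'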

Over the next while, I will be updating tiernanotoole.net with more details on how this works, and more stuff will end up on AS204994.net too. If anyone notices any weird and wonderful issues, shout. If you have more questions, shout.

Blogging on an iPad Pro

So, a few months back I bought myself an iPad Pro. I got a 10.5" with 64GB storage and the Smart Keyboard. Since then, I have been mostly using it for playing around: watching YouTube, Netflix, surfing on the couch, etc. But I started to wonder how “Pro” this was… so I went and did some testing, and in the end nearly all of this post was written on it…

First, the good stuff:

  • Microsoft’s Remote Desktop Connection works perfectly on the iPad Pro. I have RDPed into machines (with the help of ZeroTier).
  • Panic’s Prompt works well too… again, with ZeroTier, I can SSH into boxes and manage them remotely. Handy for checking on Docker instances…
  • Panic also has Coda for iOS. It’s a very nice (if somewhat expensive at $25) editor for the iPad. This post is being written in it now.
  • For Git stuff, I am using an app called Working Copy. It’s free, to an extent, but if you need to do stuff like push changes, which is kind of important, then you need to pay a fee.
  • Coda and Working Copy work together with some magic built into Working Copy. It can act as a WebDAV server, which Coda can then connect to. You open, edit, change and create docs, and Working Copy keeps note. Then you swap over to it and check in. You need to have both apps on the screen at the same time (the docking feature works well for that), since iOS seems to kill some background tasks.
  • Unrelated to blogging, but I have also tried editing photos using Lightroom, and so far so good. I have used the Apple SD Card adapter to download 50MP photos (upwards of 60MB each) from my Canon 5Ds quickly enough, add them to Lightroom, make some changes and send them to Twitter and Facebook (not Instagram just yet…), and it works well. I have managed to hook it up to my Gnarbox too.

Bad Stuff?

  • The keyboard takes a bit of getting used to. The stupid “Globe” button to swap keyboards (from English to Emoji) is in the place you would expect to find Ctrl, and Ctrl, Option and Cmd (remember, this is a “Mac”-style keyboard) are all shifted one place… I would have preferred if they moved it somewhere else, or removed it altogether…
  • A mouse would be very handy! I have tried pairing a Bluetooth mouse with it, with no luck… It would be handy especially for editing documents and code, since the touchscreen is “handy” some of the time, but not all of the time…

So, there you have it. Blogging on an iPad. Would I give up my daily drivers, my Surface Book and GodBoxV2, and just use an iPad? Hell no… For basic stuff, it works well. Basic photo editing, blogging, surfing, etc., yes. But there is a reason my workstation has 16 processor cores and 160GB RAM: I need it. I have multiple copies of Visual Studio running, SQL Server, multiple VMs running different tasks, multiple web browsers, multiple monitors, etc. The iPad can do a good chunk of stuff, but not the major stuff… not yet. Don’t get me wrong: Word, Excel, PowerPoint, Outlook, all the major office tools work grand. But Visual Studio? SQL Management Studio? Just not there yet…

So, what did I not do on the iPad and end up doing on the PC? Well, so far, nothing… Using Coda and Working Copy, I wrote the text, previewed it and checked it into GitHub. Then Prompt was used to build it on my Docker box, check it into the static site repo and push, which then publishes. Unless you see an update below, all went as planned and everything was done on an iPad…