Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Adding a Netgear LB2120 to the homelab

A few months back, Three Ireland came out with an LTE broadband offer: Unlimited* LTE broadband for EUR30 per month. It did come with an 18-month contract, but I pulled the trigger and got it as a backup link. I picked this up in the local Three store, and they had a couple of options for modems: a couple of Huawei mobile WiFi hotspots (E5573 or E5577) or a Huawei B525 modem, which is designed for home use. Alternatively, there was a SIM-only option, but given the modem was free with the contract, I went with the B525.

The B525 is not a bad router, don’t get me wrong, but it’s a router… I already have a few of them, including my Mikrotik RouterBoard CCR1016-12G, an EdgeRouter POE, a Ubiquiti USG 3 and some virtual ones too… yea, don’t ask… What I wanted was a modem: no WiFi, no routing, just a full, non-NATed public IP to the internet. There was some mention of some of the Huawei modems being able to be put into bridge mode, but I could not find out how to do it… That’s where the Netgear LB2120 comes in.

The LB2120 (there are a few different models, but mine is the Europe edition) has a Micro SIM slot, a WAN and a LAN port (both GigE), power in, a power button and 2 inputs for aerials.

The home page is fairly basic, and gives you all you need: how much data you’ve used, how much you have left, when the data plan resets, etc.

There is also an alerts option, so you can get it to send you an SMS when something happens.

But the really handy stuff is under Advanced settings/LAN:

You have the option of using it as a router, or using it in bridge mode. Needless to say, I bridged it and got a fully public IP. Currently, mine is hooked up to the second WAN port of the USG, and is serving about 30-40% of the traffic on that network (mostly media devices, IoT stuff, etc). Speed-wise, it’s not bad.

Not as good as a hardwired connection, but it’s only getting 4 bars, and it’s about 1km to the nearest cell tower. I do want to get some external aerials for it, to see if I can boost the download/upload speed, but we will see. Also, I plan on changing out the Mikrotik for something else… let’s see what I end up with!

*Unlimited is mobile network speak for 750GB per month… which does not sound very unlimited to me… but, anyway…

Finally going all in on VoIP

After many years, I am finally trying to move to a proper VoIP system for the house. This post will explain what I am using, how I am setting it up, and some other details you might (or might not) find useful.

First, backstory. I have been interested in VoIP for many years. The first post I wrote about it on this site was here, back in 2012, but I had posted about it on my other site back in 2008. It got my attention years ago as a way of saving money on calls, but in recent times that has changed a little, mainly because most providers give you calls for free (my mobile and landlines both come with unlimited calls, and with my mobile I can make them anywhere in Europe). The new reason I am interested in VoIP is consolidation: I currently have 3 mobile phone numbers, at least 1 landline dedicated to me in the house, plus a work landline. I want to be able to pick up any phone and make a call, and have it show as coming from my main number. Or, when a call comes in, I can pick it up from any of my phones… And that is what I am trying to do here… I (will) have some of it working, but some parts are still missing…

The parts I have (or will have) working are as follows:

  • My landline number in the house is being ported to Virgin Media’s VoIP service. So, that’s not stuck in the analog world any more!
  • The house phone now has a VoIP adapter, allowing the standard analog phone to make VoIP calls.

  • There is a company in the Netherlands called ZeroPlex who have a VoIP over GSM service. Essentially, the SIM they give you is connected to your own SIP trunk. You can set it up to allow all calls to go through your SIP trunk, only incoming, or only outgoing. I found their contact through Reddit, but they may be able to help if you drop them an email.
  • All VoIP traffic in the house is routed through 3CX.

  • I have a couple of SIP trunks hooked up to 3CX: Virgin Media, ZeroPlex (the NL number is sent over this, and I can make calls through this trunk too), Twilio, which I use for transient numbers, and Sip Discount, which offers really cheap calls.
  • Phone-wise, I use a Ubiquiti UVP-Executive desk phone, the SIM card, and the 3CX client on mobile (either iPhone or Android).

So, all in, I’m about 50% of the way there… As of the time of this post, the SIM is still in the mail and the phone numbers are not ported to Virgin Media… yet… Tomorrow they should be, and over the next few days there will be some tweaking to get it all working correctly… I will probably have some updates over the coming week…

Auto deploying to multiple servers with GitHub and Webhooks

In yesterday’s post, I mentioned that I wanted to try to get auto deploy working for this site. It already builds auto-magically using Forestry and puts the static HTML into a GitHub repo, but I needed to manually update the servers hosting the site… Well, not any more!

Using the magic of GitHub’s webhooks, the Webhook project and a small piece of bash shell script, I have managed to get this auto deploying…

First, download the Webhook project (it’s a Go application, so it works pretty much anywhere). Copy it somewhere on your machine. Next, you need a config. I used the GitHub sample config from the project site and made tweaks to what script to run and what I was passing in.
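
The config (github.json in my case) ends up looking roughly like this (the paths and secret below are placeholders; check the project’s sample config for the exact schema):

[
  {
    "id": "deploy-site",
    "execute-command": "/var/scripts/redeploy.sh",
    "command-working-directory": "/var/www/localfolder",
    "trigger-rule": {
      "and": [
        {
          "match": {
            "type": "payload-hash-sha1",
            "secret": "mysecret",
            "parameter": { "source": "header", "name": "X-Hub-Signature" }
          }
        },
        {
          "match": {
            "type": "value",
            "value": "refs/heads/master",
            "parameter": { "source": "payload", "name": "ref" }
          }
        }
      ]
    }
  }
]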

Next, the script to pull from GitHub was simple enough. A minimal version looks something like this (the script name and path are mine; adjust to taste):
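
#!/bin/bash
# redeploy.sh - pull the latest built site from GitHub
cd /var/www/localfolder || exit 1
git pull origin master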

The repo should already be cloned into the folder, /var/www/localfolder, and your web server should be pointing at that too. Then, it’s just a matter of running the command:

./webhook --hooks github.json --verbose

The --verbose flag gives you lots of info, so it’s handy for testing. Your app is then running and listening on the default port, 9000.

Next, head over to your project on GitHub and go to Settings:

Select Webhooks, then Add webhook.

Fill in the required details on the page (make sure the Content type is set to application/json; see the update below), and click save.

GitHub will go out and have a chat with the webhook to verify it can send and receive stuff from the hook. You can see this in the Recent Deliveries section.

Clicking on these will show you the headers that were sent, along with the payload, and you can also see the response from your server. Finally, you have the option of re-sending the payload, just in case anything goes wrong.

So, there you have it. A complete automated deploy across multiple servers! Any questions, leave a comment below!

[UPDATE] Yesterday I mentioned I had to modify the sample that was included on the Webhook site. Well, I noticed something this morning. The reason I needed to modify it was that the trigger rule was checking the header and the reference for the branch, but any time I ran it, it would not trigger… The reason was simple: the webhook app is expecting application/json, but I had the hook set to application/x-www-form-urlencoded, which is the default… the webhook app then couldn’t parse it correctly… changing that fixes the problem! Happy days!

Moving the site to Hugo

After a LOT of messing with Jekyll, I have finally moved to Hugo! There are a few things that don’t fully work yet, and there will be updates to the site soon enough, but for the moment, I am happy… It’s also a LOT faster to build than Jekyll, with fewer dependencies… Happy days!

[update] I thought I should probably update this post with a bit more information around how it’s built, why I moved to Hugo, some more links, etc.

First, it is currently being built using Forestry.io. I use it for editing the documents (mind you, I also use VSCode for this too) and it also builds the site now. I have 2 GitHub repos: the main code and the static HTML. When I check in to GitHub, it sends a webhook to Forestry, which then pulls the latest code, builds the site, and checks the resulting files into the static HTML repo. Currently it’s a manual process to get it on to my server. This site is hosted on a server in London, with my own AS204994 serving the pages. I plan on adding other servers to the list so it’s proper anycast, but only 1 is running as a web server currently (there are 4 others, if you include the one in the house: 3 in total hosted on Vultr (LON, FRA, NYC), 1 in DevCapsule, and then the house)… but that is my next challenge…

Next question: why? Well, over the last few months, it was taking longer and longer to build the site using Jekyll… It was also getting more painful to maintain, since you have to mess with dependency hell every time updates come out…

Then, when it finally builds, it takes more than 4 minutes in most cases…

Now, in all fairness, this is building on a clean machine with no bundle caching, and I did have bundle caching at some stage, but it still takes long enough (30-40 seconds in some cases) to build compared to Hugo.

The other big advantage is that Hugo arrives as a single EXE (or other binary format for other platforms) and runs on Windows without any extra stuff to install… drop the EXE in your path, cd into the folder, run hugo serve, and you have a web server running with your files… If you want to deploy them, run hugo and it builds the project and sticks it in your public folder. Do what you want with it from there! Happy days!
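
The whole workflow, assuming hugo is already on your PATH, boils down to this (the folder name is a placeholder):

cd my-site       # your Hugo project folder
hugo serve       # local dev server on http://localhost:1313, rebuilds on save
hugo             # full build; static output lands in ./public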

So, finally, some links I have found useful while building this site.

  • The Hugo Documentation site should be your first port of call… Lots of handy stuff in there…
  • Adding Search with Algolia: since the site is static, search doesn’t currently work… but with the help of a hosted service named Algolia, you can get around that easily enough…
  • Turn your static site into a JSON API: I am thinking of tweaking the Computers pages (they stopped working as planned when I moved over) and using a JSON API for it… Same with the tools list…
  • Shortcodes: Hugo doesn’t really have a plugin model like Jekyll did… but there are still a lot of interesting things you can do… Shortcodes let you write custom HTML that gets generated when you put a particular block of code in your content (see the small example after this list)… Have a look at the link on the bottom of the page to see the code used to generate this page.
  • Cloudinary: since moving to static sites, I have found images to be a pain in the ass… Found these guys the other day, and they integrate well with Forestry, and their free version works grand for smaller sites…
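
To give a quick flavour of shortcodes (this example is made up for illustration): create a template file, say layouts/shortcodes/note.html, containing:

<div class="note">{{ .Inner }}</div>

and then use it in any post like so:

{{< note >}}This text gets wrapped in the div above.{{< /note >}}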

So, any comments, questions, etc, just leave a message below. And don’t forget to subscribe to the RSS feed for updates as they come out!

[UPDATE 2] So, with the help of GitHub webhooks and the Webhook project, this site auto deploys to 5 different servers and is currently being served from 4… Dub is not fully live yet… happy days!

[UPDATE 3] As mentioned above, I now have the GitHub webhooks working.

Playing with AMD's Epyc

So, a few days back I got an email from Packet.net about a promotion they and AMD were running. Essentially, they gave me some credit for their service (I am an existing customer) to play with one of their c2.medium machines. A c2.medium comes with an AMD EPYC 7401P, which consists of 24 physical cores clocked at 2GHz with an all-core boost of 2.8GHz and a max clock of 3GHz, 48 threads, 64GB ECC memory, 2x120GB SSDs for boot and 2x480GB SSDs for main storage. It also has a 20Gb network link (2x10Gb bonded) and can run pretty much any OS you can think of (Windows is not on the list officially, but you can boot off your own ISO, so you could probably get it on there… might not be supported, but it might be possible). All this for $1 per hour! And did I mention they are bare metal machines?

This was the perfect opportunity to play with the new AMD processors. My current and previous generation workstations (GodBoxV1 and GodBoxV2) are both running Intel Xeon processors. The machine previous to this, the original 1,1 Mac Pro, is also running a Xeon processor. But before both of them, my first 2 major workstations ran AMD… The first ran 2 AMD Athlon MP processors. These were old-school, single-core processors, and I can’t even remember their speeds, but I do know they were 32-bit only and the machine maxed out at about 1.25GB RAM (I think, technically, it could support 2GB, but some limitation in the BIOS capped it at 1.25GB). The second AMD workstation ran 2 AMD Opterons… again single-core, but this time they ran 64-bit and IIRC maxed out at 8GB RAM. This was a limitation of the board, not the processor…

I have been thinking about GodBoxV.next, and the AMD processors, specifically the Threadrippers and EPYCs, are contenders for the next machine… so this test allows me to check them out before I buy! Why would I say no?!

So, I spun a box up in New Jersey running Ubuntu 17.10 to play with, and here are my findings…

First, I ran lscpu on the box to see what I was playing with.

I then ran ‘fdisk -l’ to see what disks I had to play with. On my machine, sda and sdb were the 480GB SSDs, sdc was a 120GB that was empty, and sdd was the boot drive… I installed ‘btrfs-progs’ and then formatted sda and sdb as a RAID0 array, which I then mounted to /mnt. This gave me just under 900GB to play with…
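
That setup is only a few commands, roughly the following (device names as they appeared on my box):

apt install btrfs-progs
mkfs.btrfs -f -d raid0 /dev/sda /dev/sdb   # stripe data across both SSDs
mount /dev/sda /mnt                        # mounting either device mounts the array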

So, my first test is the usual test: building the Linux kernel. I know this is something the lads at ServeTheHome do a lot, but it’s something I wanted to try myself… So, first I installed git and build-essential, then bison, flex and ncurses-dev, then I cloned Linus’ git repo at git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git. First things first: this machine has a twin 10Gb link, a shedload of cores and some very fast storage. How long did it take to clone? It downloaded 1.02 GiB at 35.32MiB/s (about 30 seconds and about 280Mbit/s) and, all in, took 2 min 55 seconds to clone. I then ran time make -j 49 to see how long it would take… hmmm… no config file… make menuconfig and just hit save… defaults are grand… time make -j 49 again… and more errors… After a bit of googling, I find the page from Ubuntu showing what I need to do to build the kernel. I follow that… download a LOT more stuff using their instructions, and finally, we get to build… Time: 6 min 12 seconds… and this is a FULL default build of the kernel…
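
Condensed, the sequence was roughly this (the packages in the second install line come from the Ubuntu kernel build page; the exact list may differ):

apt install git build-essential bison flex libncurses-dev
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
make menuconfig                          # just save the defaults
apt install libssl-dev libelf-dev bc     # the "LOT more stuff" step
time make -j 49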

The same build on a VM on GodBoxV2 (which was given 32GB RAM and 16 threads, so a full Xeon E5-4620) took 8 min 27 s to clone (8.18MiB/s, or about 64Mbit/s) and 36 min to build… Yea, that is a third of the cores, half the memory, slower storage (this is on spinny disk, not SSD), slower network, and it is also a VM vs bare metal, but still, to be essentially 6 times slower? Interesting… I might, at some stage, boot the machine off a live Linux USB and run some more tests, but not tonight…

So, all this is because I was holding out for the main event… photo processing… I wanted to do something “real life”, which for me would be development and photo processing… The kernel build gives an idea of a large project being built, and the image processing gives an idea of multimedia work…

So, I devised a test: export a bunch of photos (a mix of photos taken on my 5Ds, 5D MKII, iPhone 6 Plus and iPhone 7 Plus) that are stored in Lightroom at full size, and run them through a basic .NET Core app I wrote. The code for the app is available here. The app fully utilises the machine by using multiple threads, and because it’s 64-bit, it will use as much memory as it can get its hands on. It just does some basic processing: open the file, resize to 1024x1024 and then save it… the 1024x1024 part is just a test… I was a bit under the gun on time, so couldn’t spend as much time working on it as I wanted to…

In total, there were 1546 photos exported, and the total file size was 15GB. The first obstacle was to get them uploaded to the Packet machine, which took a while (my upload speed is currently 40Mbit/s)… Once up, I downloaded a copy of the .NET Core 2.0 SDK, cloned the repo with the project, built and ran… and man, it’s fast! 4 min 43 seconds. And it used all the cores.

Running the same code on GodBoxV2 on the bare metal (no VM this time), the run took 17 min 35 seconds… Now, GodBoxV2 has other things running in the background, but not that much… I also noticed that, on average, photos were being processed in 3-5 seconds on EPYC, but nearly 13-15 seconds, and sometimes 20-25 seconds, on GodBoxV2. I also noticed that on EPYC, the dotnet process took nearly 45GB of RAM to run… On GodBoxV2, it took over 70!

So, there you have it. Some starting tests with these processors. I am well impressed with these processors, and would have no issue getting one for the next GodBox… And with names like Epyc and Threadripper, why not?!

AS204994, Own IP Space and Anycast

So, if you are reading this page, it is being delivered with the magic of anycast… Well, technically, it was before, since I used Cloudflare, and it still is because of Cloudflare, but also because of my own ASN (AS204994), some servers in different locations, and some magic, which I will explain a bit of in this post.

This all started late last year when I got my hands on an ASN and a /48 block of IPv6 addresses. I had been reading stuff about BGP, routing, etc, and decided to go all in. It was quite cheap with the help of HostUS; all in, it was about $50 for the year. As part of the process, I needed 2 upstream providers to say they would accept my announcement. They were Hurricane Electric, through their Tunnel Broker service, and Vultr, using a few of their VPSs.

After I got my space and ASN, I started to announce the v6 addresses over Vultr and Hurricane Electric, and all was good. I had 2 Vultr servers: 1 in London, UK, and one in New Jersey, USA. I had my home machine announce to HE, and then also link to both Vultr servers using ZeroTier. All worked well, but due to some family issues, I never got around to putting it into production… till now.

Those 3 servers now share an IPv6 address on the loopback interface. When you (well, Cloudflare) ask for that IP, the closest (network-wise) server with that IP responds, and the NGinx server on that box sends back the contents of the site. This site is hosted on each box, since it’s fully static, but both AS204994 and TiernanOToole.net are hosted in Ghost, so Dublin (my machine in the house) serves them, and both Lon1 and Nyc1 do proxying. So, most requests from the US are hitting the box in NYC, and the ones in Europe share either Dub1 or Lon1. I have some tweaks to do around which servers will be running where, and may add more, but currently it’s working well.
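
The loopback part is simple enough. A sketch, using a documentation prefix standing in for my real /48:

# on every box: bind the shared (anycast) address to loopback
ip -6 addr add 2001:db8:100::1/128 dev lo
# NGinx listens on that address; BGP routing decides which box answers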

So, how do you figure out which server responded? Simple. Open the dev tools in your browser, go to the Network tab, refresh, and look at the response headers for anything on this domain. You should see something like below.
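
Or from a terminal (the hostname here is a placeholder for any page on the site):

curl -sI https://www.example.com/
# look through the response headers for the one naming the serving node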

Over the next while, I will be updating tiernanotoole.net with more details on how this works, and more stuff will end up on AS204994.net too. If anyone notices any weird and wonderful issues, shout. If you have more questions, shout.

Blogging on an iPad Pro

So, a few months back I bought myself an iPad Pro. I got a 10.5" with 64GB storage and the Smart Keyboard. Since then, I have been mostly using it for playing around: watching YouTube, Netflix, surfing on the couch, etc. But I started to wonder how “Pro” this was… so I went and did some testing, and in the end nearly all of this post was written on it…

First, the good stuff:

  • Microsoft’s Remote Desktop Connection works perfectly on the iPad Pro. I have RDPed into machines (with the help of ZeroTier).
  • Panic’s Prompt works well too… again, with ZeroTier, I can SSH into boxes and remote manage them. Handy for checking on Docker instances…
  • Panic also have Coda for iOS. It’s a very nice (if somewhat expensive at $25) editor for the iPad. This post is being written in it now.
  • For Git stuff, I am using an app called Working Copy. It’s free, to an extent, but if you need to do stuff like push changes, which is kind of important, then you need to pay a fee.
  • Coda and Working Copy work together with some magic built into Working Copy. It can act as a WebDAV server, which Coda can then connect to. You open, edit, change and create docs, and Working Copy keeps note. Then you swap over to it and check in. You need to have both apps on the screen at the same time (the docking feature works well for that) since iOS seems to kill some background tasks.
  • Unrelated to blogging, but I have also tried editing photos using Lightroom, and so far so good. I have used the Apple SD Card adapter to download 50MP photos (upwards of 60MB each) from my Canon 5Ds quickly enough, add them to Lightroom, make some changes and send them to Twitter and Facebook (not Instagram just yet…), and it works well. I have managed to hook it up to my Gnarbox too.

Bad Stuff?

  • The keyboard takes a bit of getting used to. The stupid “globe” button to swap keyboards (from English to emoji) is in the place you would expect to find Ctrl, so Ctrl, Option and Cmd (remember, this is a “Mac” style keyboard) are all shifted one place… I would have preferred if they moved that somewhere else, or removed it altogether…
  • A mouse would be very handy! I have tried pairing a Bluetooth mouse to it, with no luck… It would be handy especially for editing documents and code, since a touchscreen is “handy” some of the time, but not all the time…

So, there you have it. Blogging on an iPad. Would I give up my daily drivers, my Surface Book and GodBoxV2, and just use an iPad? Hell no… For basic stuff, it works well. Basic photo editing, blogging, surfing, etc, yes. But there is a reason my workstation has 16 processor cores and 160GB RAM: I need it. I have multiple copies of Visual Studio running, SQL Server, multiple VMs running different tasks, multiple web browsers, multiple monitors, etc. The iPad can do a good chunk of stuff, but not the major stuff… not yet. Don’t get me wrong: Word, Excel, PowerPoint, Outlook: all the major Office tools work grand. But Visual Studio? SQL Management Studio? Just not there yet…

So, what did I not do on the iPad and end up doing on the PC? Well, so far, nothing… Using Coda and Working Copy, I wrote the text, previewed it and checked it into GitHub. Then Prompt was used to build it on my Docker box, check it into the static site repo and push, which then publishes. Unless you see an update below, all went as planned and all was done on an iPad…

New Backup Plans

So, a few weeks back, CrashPlan, one of my chosen backup services, decided to kill their consumer backup plans, which made me rethink my backup plan for the house…

Note: This is how I am backing up files, and may or may not work for you. Some of this is already in “production” as of today, but others are planned… Any questions, comments, etc, leave them in the comments.

So, first, as mentioned above, CrashPlan’s consumer backup options are now dead. They are giving you the option of either upgrading to their Small Business plan, free till the end of your current contract plus 2 months, or moving to Carbonite. For me, I just moved to their Small Business option, since it’s free and meant that most of my backups just migrated over… I did not, however, look too much into Carbonite.

Just checking their site now, I would have to go for a minimum of the Plus plan, and that only allows me to back up 1 machine… I have 2 (currently) that get backed up… and probably more at a later date… And, given their yearly price for Plus is $75 per year, that’s $150 for the 2 current machines, and an extra $75 per machine after… Granted, they are saying it’s unlimited storage, but the extra fee per machine puts it out of contention…

CrashPlan’s Small Business plan is $10 per machine per month… So, my 2 machines would cost $240 per year to back up. They are giving me the next few months for free, and then for the first year they are offering a discount of 75%, but we will see what happens when renewal comes along…

So, where does that leave me? Well, it’s looking like I will be using a mix of BackBlaze and Hubic. The current idea is as follows:

  • GodBoxV2 and the Mac Mini both have Drobos plugged into them: GodBoxV2 has a Drobo 5D, and the Mini has a Drobo Mini. There is also a Drobo FS on the network. I know they are not backup solutions, but it’s better than having stuff on single drives…

  • For Windows boxes, I am backing up files to the cloud using Duplicati. These are backed up to both Hubic and BackBlaze B2. Photos and important files are backed up to both, and stuff I want backed up, but not in 2 places, is just backed up to Hubic.

  • For my Mac Mini, I use a mix of Duplicati and Arq Backup. Arq got B2 support recently, so it’s very handy for that.

  • For some of my Linux boxes, I use Duplicity. You may need to walk through the steps I have here to get it working with Hubic, and it also works with B2 (see the example after this list).
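
As a rough example of a Duplicity run against B2 (the key IDs, bucket and paths below are made up):

duplicity /home/tiernan b2://KEY_ID:APP_KEY@my-backup-bucket/godbox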

Some further notes:

  • Hubic charge about EUR5 a month for 10TB of storage. If you refer people, that can go up. I have maxed mine out at 12.5TB.

  • Hubic is limited to 10Mb/s for uploads and downloads. I never got anywhere close to that speed with CrashPlan (I’m based in Europe, they are in the US, so speed of light and what have you… it maxed out most of the time at 4-ish). That being said, even though it’s “limited” to 10Mb/s, I have seen higher… might be 10 per thread…

  • I did, for a while, use the BackBlaze app and service. I just didn’t renew because of the move to B2. If you don’t want to mess with stuff, their app works great. And if you use this referral link you get a free month… Also, it’s FAST AF! It actually managed to max out my upload connection to their server! Won’t complain about that!

  • B2 is cheap: 0.5c per GB stored… So, it works out at about $5 per TB per month, plus your usual bandwidth fees…

  • Duplicati and Duplicity both have options to send emails on success or failure, and the like. I recommend having the emails on failure turned on at a minimum… Also, on headless boxes, like my Linux box, I recommend checking the backups every week…

So, there we have it. Any questions?

Testing Forestry

So, as you probably know, this site is built with Jekyll. Jekyll is a static site generator, basically taking an input of a load of text files (see the source repo for this site on GitHub here) and generating a load more HTML (the static HTML is hosted on GitHub here, which auto publishes to Azure App Service).

In previous posts, I have talked about using the likes of Visual Studio Code and Markdown Monster to build the site. Well, a few days back, I found Forestry.io. It’s a web application which, in my case, is linked to my GitHub repo (the Jekyll source one) and allows me to make changes to the code easily. Because the way I build my site is a little different, I manually build the site and push to the destination GitHub project, but they have features allowing you to push directly to SFTP or FTP servers, GitHub, or some other options.

The interface is nice and easy to use, and you can use drafts, etc. Mind you, even with drafts, because files are written into a public repo, they are not fully private… I suppose I could just make the source repo private… Anyway, they are free for single-user sites (like this one), or they have paid plans for teams (say, your business blog with more than 1 user updating it, for example).

VSCode and Markdown Monster with Powershell

A few years back, I created a post showing you how to add an alias to PowerShell to easily start Sublime Text from a PowerShell command line. This worked well, but this is 2017 (that post is from 2012!) and my daily text editor has changed. I have moved to Visual Studio Code for most of my daily work. It works well 95% of the time. I still use Visual Studio Pro for C# development, but for quick fixes and work on, say, Go, or smaller edits, Code is great. For blogging, on the other hand, I am trying out Markdown Monster, but Code still has some nice features. We will see how the tests go.

Anyway, to upgrade the post, and to be able to use VS Code and MarkDown Monster from your PowerShell command line, I added the following:

Set-Alias code "C:\Program Files (x86)\Microsoft VS Code\Code.exe"
Set-Alias mm "C:\Program Files (x86)\Markdown Monster\MarkdownMonster.exe"

You can either run this each time you open PowerShell (a bit of a pain) or you can add it to your Microsoft.PowerShell_profile.ps1, which lives in your Documents\WindowsPowerShell folder. If you don’t have one, just create the file, add that piece of text, and the next time you open PowerShell, you are good to go.
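
If you are not sure whether you have a profile, something like this will create it and add the alias in one go ($PROFILE points at that same file):

if (!(Test-Path $PROFILE)) { New-Item -ItemType File -Path $PROFILE -Force }
Add-Content $PROFILE 'Set-Alias code "C:\Program Files (x86)\Microsoft VS Code\Code.exe"'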

In my case, I am actually in the Visual Studio Code Insiders group, so my alias is:

Set-Alias code "C:\Program Files (x86)\Microsoft VS Code Insiders\Code - Insiders.exe"

When these are set (and you have restarted your PowerShell windows), you can now run the following commands:

code filename.txt
code folder
mm filename.txt

VSCode will open either files or folders, but it seems Markdown Monster only opens files.