Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Monthly Archives: July 2018

Finally going all in on VoIP

After many years, I am finally trying to move to a proper VoIP system for the house. This post will explain what I am using, how I am setting it up, and some other details you might (or might not) find useful.

First, the backstory. I have been interested in VoIP for many years. The first post I wrote about it on this site was here, back in 2012, but I had posted about it on my other site back in 2008. It got my attention years ago as a way of saving money on calls, but in recent times that has changed a little, mainly because most providers give you calls for free (my mobile and landlines both come with unlimited calls, and with my mobile I can make them anywhere in Europe). The new reason I am interested in VoIP is consolidation: I currently have 3 mobile phone numbers, at least 1 landline dedicated to me in the house, plus a work landline. I want to be able to pick up any phone, make a call, and have it show as coming from my main number. Or a call comes in and I can pick it up from any of my phones… And that is what I am trying to do here… I (will) have some of it working, but some parts are still missing…

The parts I have (or will have) working are as follows:

  • My landline number in the house is being ported to Virgin Media’s VoIP service, so that’s not stuck in the analog world any more!
  • The house phone now has a VoIP adapter, allowing a standard analog phone to make VoIP calls.

  • There is a company in the Netherlands called ZeroPlex who have a VoIP over GSM service. Essentially, the SIM they give you is connected to your own SIP trunk. You can set it up to route all calls through your SIP trunk, only incoming, or only outgoing. I found their contact through Reddit, but they may be able to help if you drop them an email.
  • All VoIP traffic in the house is routed through 3CX.

  • I have a couple of SIP trunks hooked up to 3CX: Virgin Media; ZeroPlex (the NL number is redirected over this, and I can make calls through this trunk too); Twilio, which I use for transient numbers; and Sip Discount, which offers really cheap calls.
  • Phone-wise, I use a Ubiquiti UVP-Executive desk phone, the SIM card, and the 3CX client on mobile (either iPhone or Android).

So, all in, I’m about 50% of the way there… As of the time of this post, the SIM is still in the mail and the phone numbers are not ported to Virgin Media… yet… Tomorrow they should be, and over the next few days there will be some tweaking to get it all working correctly… I will probably have some updates over the coming week…

Auto deploying to multiple servers with GitHub and Webhooks

In yesterday’s post, I mentioned that I wanted to try to get auto deploy working for this site. It already builds auto-magically using Forestry and puts the static HTML into a GitHub repo, but I needed to manually update the servers hosting the site… Well, not any more!

Using the magic of GitHub’s webhooks, the Webhook project and a small piece of bash shell script, I have managed to get this auto deploying…

First, download the Webhook project (it’s a Go application, so it works pretty much anywhere). Copy it somewhere on your machine. Next, you need a config. I used the GitHub sample config from the project site and made tweaks to what script to run and what I was passing in.
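Something along these lines (based on the GitHub sample from the project site; the hook id, script path and secret below are placeholders, not my real values):

    [
      {
        "id": "deploy-site",
        "execute-command": "/var/scripts/deploy.sh",
        "command-working-directory": "/var/www/localfolder",
        "trigger-rule": {
          "and": [
            {
              "match": {
                "type": "payload-hash-sha1",
                "secret": "your-webhook-secret",
                "parameter": { "source": "header", "name": "X-Hub-Signature" }
              }
            },
            {
              "match": {
                "type": "value",
                "value": "refs/heads/master",
                "parameter": { "source": "payload", "name": "ref" }
              }
            }
          ]
        }
      }
    ]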

Next, the script to pull from GitHub was simple enough:
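It boils down to a couple of lines (a minimal sketch, assuming the clone lives in /var/www/localfolder and tracks master):

    #!/bin/bash
    # pull the latest static HTML into the folder the web server serves from
    cd /var/www/localfolder || exit 1
    git pull origin master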

The repo should already be cloned into the folder, /var/www/localfolder, and your web server should be pointing at that too. Then, it’s just a matter of running the command:

./webhook --hooks github.json --verbose

The --verbose flag gives you lots of info, so it’s handy for testing. And then your app is running and listening on the default port, 9000.

Next, head over to your project on GitHub and go to Settings:

Select Webhooks, then Add webhook.

Fill in the required details on the page, and click save.

GitHub will go out and have a chat with the webhook and verify it can send and receive stuff from the hook. You can see this in the Deliveries section:

Clicking on these will show you the headers that were sent, along with the payload, and you can also see the response from your server. Finally, you have the option of resending the payload, just in case anything goes wrong.

So, there you have it. A complete automated deploy across multiple servers! Any questions, leave a comment below!

[UPDATE] Yesterday I mentioned I had to modify the sample that was included on the Webhook site. Well, I noticed something this morning. The reason I needed to modify it was that the trigger rule was checking the header and the reference for the branch, but any time I ran it, it would not trigger… The reason was simple: the Webhook app expects application/json, but I had the hook set to application/x-www-form-urlencoded, which is GitHub’s default… The Webhook app then couldn’t parse the payload correctly… Changing that fixed the problem! Happy days!

Moving the site to Hugo

After a LOT of messing with Jekyll, I have finally moved to Hugo! There are a few things that don’t fully work yet, and there will be updates to the site soon enough, but for the moment, I am happy… It’s also a LOT faster to build than Jekyll, with fewer dependencies… Happy days!

[UPDATE] I thought I should probably update this post with a bit more information around how it’s built, why I moved to Hugo, some more links, etc.

First, it is currently being built using Forestry.io. I use it for editing the documents (mind you, I also use VSCode for that too), and it also builds the site now. I have 2 GitHub repos: the main code and the static HTML. When I check in to GitHub, a webhook is sent to Forestry, which then pulls the latest code, builds the site, and checks the resulting files into the static HTML repo. Currently, it’s a manual process to get it onto my server. This site is hosted on a server in London, with my own AS204994 serving the pages. I plan on adding other servers to the list so it’s proper anycast, but only 1 is running as a web server currently (there are 4 others, if you include the one in the house: 3 in total hosted on Vultr (LON, FRA, NYC), 1 in DevCapsule, and then the one in the house), but that is my next challenge…

Next question: why? Well, over the last few months, it was taking longer and longer to build the site using Jekyll… It was also getting more painful to maintain, since you have to mess with dependency hell when updates come out…

Then, when it finally built, it took more than 4 minutes in most cases…

Now, in all fairness, that was building on a clean machine with no bundle caching, and I did have bundle caching at some stage, but it still took long enough (30-40 seconds in some cases) to build compared to Hugo.

The other big advantage is that Hugo arrives as a single EXE (or the equivalent binary for other platforms) and runs on Windows without any extra stuff to install… Drop the EXE in your path, cd into your site’s folder, run hugo serve, and you have a web server running with your files… If you want to deploy them, run hugo and it builds the project and sticks it in your public folder. Do what you want with it from there! Happy days!
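So the whole local workflow is just this (with my-site standing in for whatever your project folder is called):

    cd my-site
    hugo serve     # dev server with live reload on http://localhost:1313
    hugo           # builds the site into ./public, ready to deploy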

So, finally, some links I have found useful while building this site.

  • The Hugo Documentation site should be your first port of call… Lots of handy stuff in there…
  • Adding Search with Algolia: Since the site is static, search doesn’t currently work… but with the help of a hosted service named Algolia, you can get around that easily enough…
  • Turn your static site into a JSON API: I am thinking of tweaking the Computers pages (they stopped working as planned when I moved over) and using a JSON API for them… Same with the tools list…
  • Shortcodes: Hugo doesn’t really have a plugin model like Jekyll did… but there is still a lot of interesting stuff you can do… Shortcodes let you write custom HTML that gets generated when you put a particular block of code in your content… Have a look at the link at the bottom of the page to see the code used to generate this page, and see the small sketch just after this list.
  • Cloudinary: Since moving to static sites, I have found images to be a pain in the ass… Found these guys the other day, and they integrate well with Forestry, and their free version works grand for smaller sites…
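To give a flavour of shortcodes, here is a made-up example (not something used on this site): a shortcode is just a template dropped into layouts/shortcodes/, so a file at layouts/shortcodes/note.html might contain:

    <div class="note">
      <strong>{{ .Get "title" }}</strong>
      {{ .Inner }}
    </div>

and then any content file can use it, with Hugo rendering the markdown inside:

    {{% note title="Heads up" %}}
    Any **markdown** in here ends up inside the note box.
    {{% /note %}}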

So, any comments, questions, etc., just leave a message below. And don’t forget to subscribe to the RSS feed for updates as they come out!

[UPDATE 2] So, with the help of GitHub webhooks and the Webhook project, this site auto deploys to 5 different servers and is currently being served from 4… Dub is not fully live yet… Happy days!

[UPDATE 3] As mentioned above, I now have the GitHub webhooks working.

Playing with AMD's Epyc

So, a few days back, I got an email from Packet.net about a promotion they and AMD were running. Essentially, they gave me some credit for their service (I am an existing customer) to play with one of their c2.medium machines. A c2.medium comes with an AMD EPYC 7401P, which has 24 physical cores (48 threads) clocked at 2GHz, with an all-core boost of 2.8GHz and a max clock of 3GHz, along with 64GB of ECC memory, 2x120GB SSDs for boot and 2x480GB SSDs for main storage. It also has a 20Gb network link (2x10Gb bonded) and can run pretty much any OS you can think of (Windows is not on the list officially, but you can boot off your own ISO, so you could probably get it on there… it might not be supported, but it might be possible). All this for $1 per hour! And did I mention they are bare metal machines?

This was the perfect opportunity to play with the new AMD processors. My current and previous generation workstations (GodBoxV1 and GodBoxV2) both run Intel Xeon processors. The machine before those, the original 1,1 Mac Pro, also runs a Xeon. But before all of them, my first 2 major workstations ran AMD… The first ran 2 AMD Athlon MP processors. These were old-school, single-core processors, and I can’t even remember their speeds, but I do know they were 32-bit only and the machine maxed out at about 1.25GB of RAM (I think, technically, it could support 2GB, but some limitation in the BIOS capped it at 1.25GB). The second AMD workstation ran 2 AMD Opterons… again single-core, but this time they ran 64-bit and, IIRC, maxed out at 8GB of RAM. That was a limitation of the board, not the processor…

I have been thinking about GodBoxV.next, and the AMD processors, specifically the Threadrippers and Epycs, are contenders for the next machine… so this test allows me to check them out before I buy! Why would I say no?!

So, I spun up a box in New Jersey running Ubuntu 17.10 to play with, and here are my findings…

First, I ran lscpu on the box to see what I was playing with.

I then ran ‘fdisk -l’ to see what disks I had to play with. On my machine, sda and sdb were the 480GB SSDs, sdc was an empty 120GB drive, and sdd was the boot drive… I installed ‘btrfs-progs’ and then formatted sda and sdb as a RAID0 array, which I mounted at /mnt. This gave me just under 900GB to play with…
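For anyone following along, the commands were something like this (the exact flags are an assumption, so double-check device names before formatting anything):

    sudo apt install btrfs-progs
    # stripe the two 480GB SSDs together as RAID0, for data and metadata
    sudo mkfs.btrfs -f -d raid0 -m raid0 /dev/sda /dev/sdb
    # mounting either member device brings up the whole array
    sudo mount /dev/sda /mnt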

So, my first test is the usual one: building the Linux kernel. I know this is something the lads at ServeTheHome do a lot, but it’s something I wanted to try myself… So, first I installed git and build-essential, then bison, flex and ncurses-dev, and then I cloned Linus’ git repo at git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git. First things first: this machine has a twin 10Gb link, a shed load of cores and some very fast storage. How long did it take to clone? It downloaded 1.02 GiB at 35.32MiB/s (about 30 seconds, at around 280Mbit/s) and, all in, it took 2 min 55 seconds to clone. I then ran time make -j 49 to see how long it would take… hmmm… no config file… make menuconfig and just hit save… defaults are grand… time make -j 49 again… and more errors… After a bit of googling, I found the page from Ubuntu showing what I needed to do to build the kernel. I followed that, downloaded a LOT more stuff using their instructions, and finally we got a build… Time: 6 min 12 seconds… and this is a FULL default build of the kernel…
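Condensed into something repeatable, the steps were roughly as follows (package names as on Ubuntu 17.10; the Ubuntu wiki page pulls in a pile of extra build dependencies on top of these):

    sudo apt install git build-essential bison flex libncurses5-dev
    git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    cd linux
    make defconfig      # same result as menuconfig + saving the defaults
    time make -j 49     # one job per hardware thread, plus one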

The same build on a VM on GodBoxV2 (which was given 32GB RAM and 16 threads, so a full Xeon E5-4620) took 8 min 27 s to clone (8.18MiB/s, or about 64Mbit/s) and 36 min to build… Yeah, that is a third of the cores, half the memory, slower storage (this is on spinning disk, not SSD) and a slower network, and it is also a VM vs bare metal, but still, to be essentially 6 times slower? Interesting… I might, at some stage, boot the machine off a live Linux USB and run some more tests, but not tonight…

So, all of this was because I was holding out for the main event… photo processing… I wanted to do something “real life”, which for me means development and photo processing… The kernel build gives an idea of a large project being built; the image processing gives an idea of multimedia work…

So, I devised a test: export a bunch of photos (a mix of photos taken on my 5Ds, 5D MkII, iPhone 6 Plus and iPhone 7 Plus) that are stored in Lightroom as full-size exports, and run them through a basic .NET Core app I wrote. The code for the app is available here. The app fully utilises the machine by using multiple threads, and because it’s 64-bit, it will use as much memory as it can get its hands on. It just does some basic processing: open the file, resize it to 1024x1024, and save it… The 1024x1024 part is just a test… I was a bit under the gun on time, so I couldn’t spend as much time working on it as I wanted to…
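The core of it is roughly this shape (a sketch only: it assumes the SixLabors.ImageSharp library and JPEG input, which the real app linked above may not match exactly):

    using System.IO;
    using System.Threading.Tasks;
    using SixLabors.ImageSharp;
    using SixLabors.ImageSharp.Processing;

    class Program
    {
        static void Main(string[] args)
        {
            string inDir = args[0], outDir = args[1];
            Directory.CreateDirectory(outDir);

            // one photo per worker thread: open, resize to 1024x1024, save
            Parallel.ForEach(Directory.EnumerateFiles(inDir, "*.jpg"), path =>
            {
                using (var image = Image.Load(path))
                {
                    image.Mutate(x => x.Resize(1024, 1024));
                    image.Save(Path.Combine(outDir, Path.GetFileName(path)));
                }
            });
        }
    }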

In total, there were 1546 photos exported, and the total file size was 15GB. The first obstacle was getting them uploaded to the Packet machine, which took a while (my upload speed is currently 40Mbit/s)… Once up, I downloaded a copy of the .NET Core 2.0 SDK, cloned the repo with the project, built it and ran it… and man, it’s fast! 4 min 43 seconds. And it used all the cores.

Running the same code on GodBoxV2 on bare metal (no VM this time), I got a run of 17 min 35 seconds… Now, GodBoxV2 has other things running in the background, but not that much… I also noticed that, on average, photos were being processed in 3-5 seconds on the Epyc, but 13-15 seconds, and sometimes 20-25 seconds, on GodBoxV2. I also noticed that on the Epyc, the dotnet process took nearly 45GB of RAM to run… On GodBoxV2, it took over 70!

So, there you have it. Some starting tests with these processors. I am well impressed, and I would have no issue getting one for the next GodBox… And with names like Epyc and Threadripper, why not?!