Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Currently Viewing Posts in Programming

Continuous Integration and Blogging

Back in August of 2012, I started this site using Git and Jekyll. I hosted most of it at home, pushing to a server in the house. Then, a few years back, I moved to pushing the files to Amazon S3, with CloudFront doing distribution. The last move had me hosting the files on NearlyFreeSpeech.NET with CloudFlare doing the content distribution… Well, that changed over the last few days… again…

Currently, you are still hitting CloudFlare when you hit this site, but the backend is back to being hosted on Amazon S3. How the files get to S3 is more interesting now, though. All the “code” for this site is up in a GitHub repo, and any time something is checked in, Travis CI kicks off, builds the files using Jekyll and pushes them to S3 using s3_website. All my “private” keys are hidden in Travis CI, so no one can access them but me. This makes updating the site a lot easier: I can create a file in GitHub directly, preview it, make changes, etc., and then check it in. Once checked in, Travis kicks off, builds and deploys. All good!
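For the curious, the pipeline boils down to a .travis.yml along these lines. This is a sketch: the Ruby version and gem setup are my assumptions, and the real AWS keys live in encrypted Travis environment variables, not in the file.

```yaml
language: ruby
rvm:
  - 2.2
install:
  - bundle install                # Jekyll and s3_website come in via the Gemfile
script:
  - bundle exec jekyll build      # build the static site into _site/
after_success:
  - bundle exec s3_website push   # push _site/ up to the S3 bucket
```

The keys s3_website needs are read from environment variables, which Travis keeps encrypted in the repo settings, so nothing secret ever appears in the repo itself.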

It also means that if “bugs” are found on the site (by you, my dear reader), or if you have requests for things, a “bug report” can be opened on the issues log. I already have a bug open for making the site faster… Anything else you want me to change?

Announcing B2 Uploader and Hubic Testing 2.0

I have 2 new side projects to announce on the site today. The first has been running for a while (first check-in was December 28th) and it’s called B2Uploader. It’s a fairly simple Windows application for uploading files to BackBlaze B2. If you are not familiar with BackBlaze, they provide unlimited backup storage for the low price of a fiver a month. They are the guys who designed the BackBlaze storage pods (I want one, by the way!) that allow them to provide unlimited storage for that fiver a month (I currently back up over 4TB to them!), and late last year they started offering B2, which is a storage platform on their pods with a (somewhat) easy to use API. AND IT’S CHEAP! Half a cent (US$0.005) per GB stored per month! That’s crazy cheap!

B2Uploader uses the B2 API to upload files (it could do more, but currently, as the name suggests, it’s upload only). It’s quite simple, and all the code is available. More stuff is coming over the next few weeks. Some of the usual badges for open source applications are below. If you want to shout at me, shout in the Gitter chatroom and I will reply. You can see the latest builds over on Travis CI, and the latest releases are available on GitHub.

Join the chat at https://Gitter.im/tiernano/b2uploader

Build Status

The second project is still in the planning phase, and it’s an update to an older project I was working on called HubicTesting. It is very cleverly named, wait for it… HubicTesting 2.0! I have mentioned Hubic before here. Cheap (about a tenner a month) for lots of storage (10TB!), but an odd API… It uses Swift for storage, but has a weird(ish) API for authentication. Anyway, more details will be on the site once I write it up.

So, anyone needing to upload files to B2, check out B2Uploader. Want to work with stuff on Hubic? Check out HubicTesting 2.0. Any questions, drop me a mail or find me on the Gitter channel. Have a good one!

Using Git and Route53 together

So, earlier on today, I was talking about using Git with a DNS service called LuaDNS to update your DNS records. Well, thing is, I have 30+ domains registered, and about 25 of them are hosted on Amazon’s Route53. So moving ALL of them seems, at the moment at least, excessive… So, I went digging…

There is a tool called cli53 which allows you to manage Route53 objects from the command line. It can also export your zones to BIND format and then re-import them if you have made changes… This all came out of a blog post by the guys and gals at netguru, who showed how they integrate their DNS records with their Continuous Integration… Now, I have not gotten to that stage just yet, but it’s only one more step down the road… But I don’t have my zones in BIND format… So, how do I do that?

I tweaked their block of Ruby code (first time playing with Ruby, so be gentle with me) and got the following:
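The original script is missing from this archive, but the shape of it was roughly this. Note that cli53’s exact `list` output format is an assumption here, so tweak the parsing to match your version:

```ruby
#!/usr/bin/env ruby
# Export every Route53 zone to a .bind file via cli53.

CLI53 = "cli53" # change this if cli53 is not on your PATH

# "example.com." (as Route53 reports it) -> "example.com.bind"
def bind_filename(zone)
  "#{zone.sub(/\.$/, '')}.bind"
end

if system("which #{CLI53} > /dev/null 2>&1")
  # pull the zone names out of `cli53 list`, then export each one to BIND format
  zones = `#{CLI53} list`.lines.map { |line| line[/\S+\.$/] }.compact
  zones.each do |zone|
    File.write(bind_filename(zone), `#{CLI53} export #{zone}`)
  end
end
```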

Essentially, it runs cli53 (you may need to change the path) and then creates a .bind file for each zone.

Then, using their code below, you can re-import them into Route53:
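Again the original block is gone, but a matching re-import loop looks something like this. The `--file`/`--replace` flags are from cli53’s help text as I remember it, so double-check against `cli53 import --help`:

```ruby
#!/usr/bin/env ruby
# Re-import each .bind file in the current directory back into Route53.

CLI53 = "cli53" # change this if cli53 is not on your PATH

# "example.com.bind" -> "example.com"
def zone_name(file)
  File.basename(file, ".bind")
end

if system("which #{CLI53} > /dev/null 2>&1")
  Dir.glob("*.bind").each do |file|
    # --replace swaps the zone's contents for what's in the file
    system("#{CLI53} import #{zone_name(file)} --file #{file} --replace")
  end
end
```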

I have exported all mine, added them to Git and done some testing… All seems to be in order… Once I do some tweaks, I can get that CI piece working and it should all be magic…

Git Push DNS

There are now a lot of services that have “git push” options available… You can build websites with Azure and GitHub, books using ShareLaTeX and now DNS using LuaDNS. I have one zone running at the moment (tiernanotoole.net) and you can see the DNS records on GitHub here. I am tempted to move other records over soon… but I am currently on Amazon Route53 and 1: it works, so don’t break it, and 2: I am not sure how to bulk export records from Route53 to BIND or Lua format.

[update] 2 quick updates: 1) Their free account, which is what I am using, allows 3 domains and 30 host records. They also charge less than Route53:

  • Route53 costs 50c per domain (for the first 25 domains) per month, plus query charges. So 10 domains comes to about $60 a year + queries (at 40c per million).
  • LuaDNS costs $29 a year for 10 domains, 5 million (ish) queries a month and 500 host records…

I think I have nearly 30 domains on AWS… so their $59-a-year package, which includes 30 domains, would probably save me money…

And 2) I forgot about one of those “git push” services… DeveloperMail is a service, for developers, for managing email servers. IMAP, SMTP, Git… all supported! Just signed up… $2 a month per user. Let’s see how this works…

Hubic and Duplicity

I mentioned HubiC in my last post, and in it I said that you could use Duplicity for backups. Well, this is how you get it to work…

First, I am using Ubuntu 14.04 (I think…). I use Ubuntu in house for a few things:

  • It runs Tiernan’s Comms Closet, GeekPhotographer and Tiernan’s Podcast, all in house, as well as being used to build this site. The web server and MySQL server are separated: MySQL runs on Windows, the web on Ubuntu… but that’s a different story…
  • I have a couple of proxy servers running Ubuntu also
  • Other general servers running Ubuntu… don’t ask, because I can’t remember what they do half the time…

So, Duplicity is a backup application. From their website:

What is it?

Duplicity backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.

The duplicity package also includes the rdiffdir utility. Rdiffdir is an extension of librsync’s rdiff to directories—it can be used to produce signatures and deltas of directories as well as regular files. These signatures and deltas are in GNU tar format.

So, how do we get it working? Well, given that I am on Ubuntu, these are the steps I needed to take:

  • First, we need some credentials and API keys… If you haven’t signed up for HubiC, do so now… That URL gets you an extra 5GB if you sign up for free (usually 25GB); if you pay 1EUR a month, you get 110GB (usually 100GB); and 5EUR a month gets you a staggering 10TB (yup! Terabytes!).
  • Log in to HubiC, and in the menu go to ‘My Account’, ‘Developers’. In here, create a new application (a name and a URL to redirect to… http://localhost seems to work correctly). Note the Client ID and Secret ID you are given.
  • Take the contents of the following gist and replace them with your own details… I know, I am not a fan of sticking my password in a txt file… but it should be on your local machine…
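The gist itself is missing from this archive, but Duplicity’s HubiC backend reads an INI-style file. From memory (the field names are per the pyrax HubiC fork, so double-check them against the backend docs), it looks something like this:

```ini
[hubic]
email         = you@example.com
password      = your_hubic_password
client_id     = <Client ID from the Developers page>
client_secret = <Secret ID from the Developers page>
redirect_uri  = http://localhost/
```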
  • That file should be in your home directory and should be called .hubic_credentials.
  • Add the duplicity PPA (https://launchpad.net/~duplicity-team/+archive/ubuntu/ppa) to Ubuntu using the add-apt-repository command (details at the link above, under ‘read about installing’). For me, I just ran ‘sudo add-apt-repository ppa:duplicity-team/ppa’.
  • Install duplicity by running ‘sudo apt-get install duplicity’. Don’t forget (it’s in the tutorial above!) to run ‘sudo apt-get update’ first!
  • When I ran that, there were a few extra Python packages to be installed, so I was asked if I wanted to install them… Say yes.
  • Now, to run a backup we run the following command:

duplicity ~/ cf+hubic://location

  • cf+hubic is the backend to use, ~/ is what to back up (my home directory in this case) and location is where on HubiC we want it stored. If this doesn’t exist, not a problem… it will be created.
  • After we run this we… ahhh… I get an error:

BackendException: This backend requires the pyrax library available from Rackspace.

  • Right… the pyrax library is from Rackspace and is available to download through pip…
  • I seem to have Python and a few other bits installed on this machine, so running ‘sudo pip install pyrax’ works… Your mileage may vary… [e.g. this is out of scope for this tutorial! You’re on your own!]
  • Other problem… I got a load of weird and wonderful errors like this:

AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'SplitResult'

  • I fixed these by running:

sudo pip install furl --upgrade

  • FINALLY! IT’S ALIVE!!! By default, it asks you for a key for the GnuPG encryption… and it’s all good! The first backup creates the directories, required files, etc. The next time you run the command, it will only upload changes. It will also ask for the GnuPG key you entered, so remember it!
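Putting the walkthrough together, the pair of commands ends up looking like this. The container path here is a made-up example, and the whole thing is guarded behind an environment variable so the sketch does nothing by accident:

```shell
# What to back up, and where on HubiC it goes ("backups/home" is an example path)
BACKUP_SRC="$HOME"
BACKUP_DEST="cf+hubic://backups/home"

# Only run when explicitly asked to, and only if duplicity is installed
if [ -n "${RUN_BACKUP:-}" ] && command -v duplicity >/dev/null 2>&1; then
  duplicity "$BACKUP_SRC" "$BACKUP_DEST"            # back up (incremental after the first run)
  duplicity restore "$BACKUP_DEST" "$HOME/restored" # pull everything back out again
fi
```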

And that’s all, folks! Any questions, leave them in the comments!

Hubic, OpenStack Swift and Curl

HubiC is an online storage site, built by the guys at OVH. They are currently offering 30GB free (if you use the link above), or if you pay, you get 110GB (instead of the usual 100GB) for EUR1 a month, or 10.5TB (yup… TERABYTES!) for EUR5 a month… That’s a crazy amount of storage for a not-so-crazy amount of money!

So, while playing around with different things, I found they have an API, so other than the usual apps to play with (like the Hubic Apps for iPhone, Android, Windows Phone, Windows Desktop and OSX, Duplicity for backing up *nix boxes, and a few others) you can build your own…

But first, I needed to figure out how… So, after a lot of arsing around in Linux shells with curl, I finally got some stuff working!

First, I used the HubiC sandbox to get the keys… it’s quite simple to walk through… this gets you your Access Token (see step 3). Next, we need to get the endpoint from HubiC. This gist shows more:

Quick walkthrough:

The first curl request is to the HubiC API to get the credentials… this gives you a JSON response with a token and an endpoint URL, as well as an expiry time…

The next request gets you a list of all files (or at least a load of files, in my case) in your folder. The default name here is my folder… I think it’s what everyone starts out with in HubiC… if you remove it, you will see all your top-level folders.

The next request I tried uploads a file… the filename part is where you want it to be stored; the file itself must exist on your local machine.

Finally, downloading a file… pass in the location of the file on the server (listing files will give you the location) and use -o in curl to set the output location…
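Since the gist is missing from this archive, here is a sketch of the four requests described above. The token and endpoint come from the sandbox steps, the container name “default” is an assumption, and nothing actually runs unless you export the variables first:

```shell
ACCESS="${HUBIC_ACCESS_TOKEN:-}"  # OAuth access token from the HubiC sandbox
TOKEN="${HUBIC_TOKEN:-}"          # Swift X-Auth-Token from /account/credentials
ENDPOINT="${HUBIC_ENDPOINT:-}"    # Swift endpoint URL from the same response

# Helper: build the URL for one object in one container
object_url() { printf '%s/%s/%s' "$1" "$2" "$3"; }

# 1. Swap the OAuth token for Swift credentials (token + endpoint + expiry)
if [ -n "$ACCESS" ]; then
  curl -s -H "Authorization: Bearer $ACCESS" \
    https://api.hubic.com/1.0/account/credentials
fi

if [ -n "$TOKEN" ] && [ -n "$ENDPOINT" ]; then
  # 2. List what's in the "default" container
  curl -s -H "X-Auth-Token: $TOKEN" "$ENDPOINT/default"
  # 3. Upload a local file
  curl -s -X PUT -H "X-Auth-Token: $TOKEN" -T photo.jpg \
    "$(object_url "$ENDPOINT" default photo.jpg)"
  # 4. Download it again, -o setting where it lands locally
  curl -s -H "X-Auth-Token: $TOKEN" \
    -o photo-copy.jpg "$(object_url "$ENDPOINT" default photo.jpg)"
fi
```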

Simples! Now to get this working in C#… The full OpenStack Swift API is available to show how to do more… hopefully it will help with my C# coding…

Compressing and UnCompressing Protobuf items in C#

Part of a project I am working on requires sending large amounts of data between different instances. To get this to work efficiently, we started using Protocol Buffers, via protobuf-net, in .NET. But the files were still quite large (17MB, give or take). So, we looked into compression…

Here are some examples of how we managed to compress the protobuf files. We got some decent compression: 3MB files, down from 17MB. Very happy.

To compress an object (obj) and write it to a temp file (tmpfile):
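The original snippet is missing from this archive, but the technique is protobuf-net’s Serializer writing straight into a GZipStream; a sketch (the class and method names here are mine, not the project’s):

```csharp
using System.IO;
using System.IO.Compression;
using ProtoBuf;

public static class ProtoGzip
{
    // Serialize obj with protobuf-net, gzip-compressing on the way to disk.
    public static void SerializeCompressed<T>(T obj, string tmpfile)
    {
        using (var file = File.Create(tmpfile))
        using (var gzip = new GZipStream(file, CompressionMode.Compress))
        {
            // protobuf-net writes the wire format directly into the gzip stream
            Serializer.Serialize(gzip, obj);
        }
    }
}
```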

To decompress the file back to a known type:
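Again the original snippet is gone; the reverse direction is the same pair of streams with protobuf-net deserializing at the end (names are mine):

```csharp
using System.IO;
using System.IO.Compression;
using ProtoBuf;

public static class ProtoGunzip
{
    // Read the gzipped file back and let protobuf-net rebuild the object.
    public static T DeserializeCompressed<T>(string tmpfile)
    {
        using (var file = File.OpenRead(tmpfile))
        using (var gzip = new GZipStream(file, CompressionMode.Decompress))
        {
            return Serializer.Deserialize<T>(gzip);
        }
    }
}
```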

Handbrake Cluster

[UPDATED] Someone asked in the comments if there was a binary build of this project. There is now! http://handbrakecluster.codeplex.com now hosts the code and binaries, and will soon have help files and documentation.

A few days back, I wrote a post titled Powershell + Handbrake + AppleTV + iTunes = Automatic TV… ish. In it, I included a block of PowerShell code to bulk convert TV shows from whatever format you had them in to an M4V format for the AppleTV. Well, as they say, “If necessity is the mother of all invention, laziness must be the father”. I have a lot of shows I wanted converted for the AppleTV, so I built something… It’s called HandBrake Cluster; it is written in .NET 4.5 and uses MSMQ and HandBrake to do the processing… The workflow is as follows:

  • Set up the system as described on the HandBrake Cluster site.
  • Run the adder program with the parameters required (location of the files you want converted, type of files to find, where you want the output files to be placed, output file type).
  • Run the cluster EXE on as many machines as you want. Each machine will need to point to the correct MSMQ on the head node, have its own copy of HandBrake, and have access to the file share that you are reading from and writing to…
  • Each node will take a message off the queue, process the file and then mark it as completed. There is code to detect failed messages, so, in theory, anything that goes into the queue should always be processed…
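A rough sketch of what each node’s loop looks like, using .NET’s System.Messaging MSMQ bindings. The queue path and message body here are my illustration, not the actual project code:

```csharp
using System;
using System.Messaging; // .NET's MSMQ bindings

class Worker
{
    static void Main()
    {
        // Hypothetical transactional queue on the head node
        var queue = new MessageQueue(@"FormatName:DIRECT=OS:headnode\private$\handbrake");
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

        while (true)
        {
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                var job = (string)queue.Receive(tx).Body; // e.g. input/output file paths
                // ...run HandBrakeCLI against the job here...
                tx.Commit(); // the message only leaves the queue once conversion succeeded
            }
        }
    }
}
```

Using a transaction is what gives the “should always be processed” property: if a node dies mid-conversion, the uncommitted message goes back on the queue for another node.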

I have run this at home on a couple of different machines, and so far, so good… My room gets a bit warmer when I kick this off, and between the 3 machines I ran it on, my FPS count went from just 80-120 on the GodBox alone to a total of about 160-240 FPS (GodBox = 80-120, servers 1 and 2 about 40-60 FPS each).

The next thing I managed to do was tweak my import process for iTunes. I am using a program called iHomeServer for iTunes, which runs on the GodBox. It monitors a folder, which is where HandBrake Cluster writes to, and adds new files to iTunes. I can then tweak the metadata using the tool: artwork, title info, which shows are related, etc. It is very handy, and something I am very happy with.

PowerShell + HandBrake + AppleTV + iTunes = Automatic TV… Ish…

I have an AppleTV in the house (3, actually) and I am very happy with its ease of use, size and cost… You can’t argue with the small price!

I also have a lot of content that works great with the AppleTV in iTunes, but some content does not work so well… So, I needed to find a way to convert files quickly and easily… that’s where PowerShell and HandBrake come in…
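The original script block is missing from this archive, but the shape of it was roughly this. The paths and the preset name here are placeholders; the bullets that follow walk through what to change:

```powershell
# Where the source files live (a NAS share, in my case)
$source    = "\\nas\tv\toconvert"
# 64-bit HandBrakeCLI on a 64-bit Windows
$handbrake = "C:\Program Files\HandBrake\HandBrakeCLI.exe"
# The magic iTunes import folder
$dest      = "D:\iTunes\Automatically Add to iTunes"

# Convert every matching file to an AppleTV-friendly M4V
Get-ChildItem $source -Recurse -Include *.avi, *.mkv | ForEach-Object {
    $newname = Join-Path $dest ($_.BaseName + ".m4v")
    & $handbrake -i $_.FullName -o $newname --preset "AppleTV 3"
}
```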

  • In the code above, you need to set the path of where your files live. In my case, they live on a NAS.
  • Next, set the location of HandBrake… I have a 64-bit copy of Windows and a 64-bit copy of HandBrake.
  • Set the new file name to where you want the file to go. In my case, I have it set to my “Automatically Add to iTunes” folder, which is a magic folder for iTunes that copies any files dropped in there into your iTunes library.
  • Finally, the conversion is run…

This may take a few min, depending on a few factors:

  • how many files you are converting
  • how fast your machine is
  • how fast your machine can read and write the files…
  • etc…

I have set files to convert on 3 different machines (the GodBox and 2 other servers) and I am getting speeds of anywhere between 250FPS (on the GodBox running 2 instances of HandBrake CLI) and 40 – 60 FPS on the older servers… on the remote machines, they are sending files to the GodBox folder also, so once everything completes, it’s just a matter of opening iTunes and we are good to go… Now to figure out how to automate the Metadata import…

Building a Cross Compiler for your Raspberry Pi

My main machine at home, known as “The GodBox”, is a dual-processor, quad-core Xeon 5520 with 60GB RAM, two 300GB 10,000RPM Western Digital VelociRaptors in RAID 0 for boot, four 1TB 7200RPM drives for storage, two more 300GB 10,000RPM drives for “scratch disk” and a couple of high(ish)-end graphics cards with 3 monitors plugged in… Hence the name, GodBox!

Anyway, the Raspberry Pi, on the other hand, has a 700MHz processor, 256MB RAM and not much else… So, if you need to write code for your Pi and you don’t want to wait a long time for it to compile, check out this tutorial on how to build a cross compiler for your Raspberry Pi, which will allow you to build your apps on a different machine… I have a college project the Raspberry Pi will be used for, and I am thinking this is how I will build the code.