Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Quick tip for internet facing ESXi servers

Quick tip for all of you with internet facing VMware ESXi hosts. I just got my hands on a box on the Hetzner network (more on that later) and, using their LARA system, I installed ESXi on it. All was good; then I tried logging in a couple of hours later and kept getting errors about my password being wrong… So I tried a few more times, got pissed off and rebooted the box (had to do a hard reboot, since I couldn't even get in over KVM). I thought this was a hardware issue, or a config issue, and left it… Yesterday, I had the console open most of the day, and when looking at something I noticed this:

Well, that’s why I couldn’t log in! So, tip: create a second user account, name it something other than root, give it a secure password and use that to log in to your ESXi box (a quick sketch of doing this from the shell is below). Ideally, your ESXi box should be behind a firewall, but in the case of a dedicated server, that may not be financially feasible… Hope this helps someone!
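
If you are on a newer ESXi build (6.0 or later), something like this from the ESXi shell should do it — the username and password here are placeholders, and on older 5.x hosts you would create the user through the vSphere Client instead:

# create the new local user (name and password are placeholders - pick your own)
esxcli system account add -i backupadmin -p 'S0me-Str0ng-Passw0rd!' -c 'S0me-Str0ng-Passw0rd!'
# give it the Admin role so you can use it instead of root
esxcli system permission set -i backupadmin -r Admin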

VLANs, Wifi and Mikrotik

About a month ago, while I was recovering from surgery, I attended a webinar on Cisco Meraki devices. After the webinar, I was contacted by Meraki and given an MR18 with a 3 year license to play with and evaluate. So, I set it up in the house and all was good.

Thing is, the wifi in the house was grand previously. I have a RouterBoard RB951G which does the job and has no issues. And because I am mostly offsite in the office where I work, and because I need to remotely manage the network, the MR18 is going into the office from tomorrow morning. I may talk about the MR18 and the rest of the Meraki gear later on, but this is not that post. This post is about something the MR18 does that I wanted to do on the RB951.

So, the MR18 allows you to create multiple wifi SSIDs, each with different encryption and security settings, and each can use a different VLAN. Now, the Mikrotik does the same, but the VLAN stuff is not that easy to figure out. Essentially, what I needed to do was as follows:

create your new wifi SSIDs:

/interface wireless
add master-interface=wlan1 name=wlan1.10 ssid=vlan10
add master-interface=wlan1 name=wlan1.20 ssid=vlan20
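
If, like the MR18, you want each SSID to have its own passphrase, you can also give each virtual AP its own security profile — a rough sketch, where the profile names and pre-shared keys are just placeholders:

/interface wireless security-profiles
# the profile names and ChangeMe keys below are placeholders
add name=vlan10-wpa2 mode=dynamic-keys authentication-types=wpa2-psk wpa2-pre-shared-key=ChangeMe10
add name=vlan20-wpa2 mode=dynamic-keys authentication-types=wpa2-psk wpa2-pre-shared-key=ChangeMe20
/interface wireless
set wlan1.10 security-profile=vlan10-wpa2
set wlan1.20 security-profile=vlan20-wpa2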

next, create your VLANs. These need to be connected back to your main ethernet connection. In the case of my RB951, there are 5 ethernet ports. Port 1 is the gateway back to my Cisco switch and on to my pfSense router; ports 2-5 are all slaves of port 1, which is the master, so port 1 is essentially the trunk port. The VLANs are created on that.

/interface vlan
add interface=ether1-gateway name=ether1.10 vlan-id=10
add interface=ether1-gateway name=ether1.20 vlan-id=20

next, create a bridge for each VLAN to connect them together

/interface bridge
add name=vlan10
add name=vlan20

and connect the VLAN and wireless interfaces to their bridges

/interface bridge port
add bridge=vlan10 interface=ether1.10
add bridge=vlan10 interface=wlan1.10
add bridge=vlan20 interface=ether1.20
add bridge=vlan20 interface=wlan1.20

And that’s all I needed to do. I have a Sophos UTM Home Edition running on a VM for testing, which vlan10 is connected to. It has an upstream connection back to the pfSense box, which has it firewalled off and allows it outside the network, but nothing else. I am planning on doing this with other firewalls, just to do some testing. This allows me to connect my phone, laptop, or any other wifi device to a given wifi network and be on my way. I also have an older Dell PowerConnect switch which, if I ever get around to it, will have multiple connections back to the Cisco, allowing physical devices to connect to different VLANs.

Any questions, comments, etc., leave a comment below.

Using git and Route53 together

So, earlier on today I was talking about using Git with a DNS service called LuaDNS to update your DNS records. Thing is, I have 30+ domains registered, and about 25 of them are hosted on Amazon’s Route53. So, moving ALL of them seems, well, excessive at the moment… So, I went digging…

There is a tool called cli53 which lets you manage Route53 records from the command line. It can also export your zones to BIND format and re-import them if you have made changes… This all came out of a blog post by the guys and gals at netguru, who showed how they integrate their DNS records with their Continuous Integration setup… Now, I have not gotten to that stage just yet, but it’s only one more step down the road… But I don’t have my zones in BIND format… So, how do I do that?

I tweaked their block of ruby code (first time playing with ruby, be gentle with me) and got the following:

Essentially, it runs cli53 (you may need to change the path to suit your setup) and then creates a .bind file for each zone.
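
The gist itself isn’t reproduced here, but the same idea in plain shell looks roughly like this — zones.txt is just a made-up name for a file listing one domain per line, and cli53 needs to be on your path:

# export each Route53 zone listed in zones.txt to its own .bind file
while read -r zone; do
  cli53 export "$zone" > "$zone.bind"
done < zones.txt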

Then, using their import code, you can push the zones back into Route53:
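
(Their gist isn’t reproduced here either; the shell equivalent is roughly the following — the exact flag names and ordering differ between cli53 versions, so check cli53 --help first:)

# push each edited .bind file back into its matching Route53 zone
for f in *.bind; do
  zone="${f%.bind}"
  cli53 import --file "$f" --replace "$zone"
done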

I have exported all mine, added them to git and done some testing… All seems to be in order… Once I do some tweaks, I can get that CI piece working and it should all be magic…

Git Push DNS

There are now a lot of services that have "git push" options available… You can build websites with Azure and GitHub, books using ShareLaTeX and now, DNS using LuaDNS. I have one zone running at the moment (tiernanotoole.net) and you can see the DNS records on GitHub here. I am tempted to move other records over soon… but I am currently on Amazon Route53 and 1: it works, so don’t break it, and 2: I’m not sure how to bulk export records from Route53 to BIND or Lua format.

[update] 2 quick updates: 1) their free account, which is what I am using, allows 3 domains and 30 host records. They also charge less than Route53:

  • Route 53 for 10 domains costs 50c per domain per month (for the first 25 domains), plus query charges: in total, about $60 a year + queries (at 40c per million).
  • LuaDNS costs $29 a year for 10 domains, 5 million (ish) queries a month and 500 host records…

I think I have nearly 30 domains on AWS… so their $59 a year package, which includes 30 domains, would probably save me money…

And 2) I forgot about one of those git push services… DeveloperMail is a service for developers to manage email servers. IMAP, SMTP, Git… all supported! Just signed up… $2 a month per user. Let’s see how this works…

Bulk compressing images for the Web

Now that all my sites are running Jekyll, I am trying to get them optimized for SPEED, which meant looking at all the stuff that takes time to download… There are more tweaks (and possibly posts) coming down the road, but to start, I needed to look at images.

First things first: I’m running this on a Sabayon Linux box, so some of the install commands will be different… (Also, I do need to explain why I moved from Windows to Linux on the GodboxV2, but that’s a different post…)

First, install OptiPNG (they have a Windows build too…) and JPEGOptim

sudo equo install optipng
sudo equo install jpegoptim

[UPDATE] I tried this on an Ubuntu box, and the package names there are the same. So, to install both:

sudo apt-get install optipng jpegoptim

Next, using the Linux find command (this should also work on OSX…), run OptiPNG and JPEGOptim on all the PNGs and JPGs in your given directory:

find . -iname "*.png" -exec optipng {} \;
find . -iname "*.jpe?g" -exec jpegoptim {} \;

Depending on how many images you have (and how fast your machine is), it should take a minute or two…

That’s it! I did a git status, which showed me all the changed images, and then deployed the Jekyll sites… All good!

Hubic and Duplicity

I mentioned hubiC in my last post, and in it I said that you could use Duplicity for backups. Well, this is how you get it to work…

First, I am using Ubuntu 14.04 (I think…). I use Ubuntu in house for a few things:

  • It’s running Tiernan’s Comms Closet, GeekPhotographer and Tiernan’s Podcast, all in house, as well as being used to build this site. The web server and MySQL server are separated; MySQL runs on Windows, web on Ubuntu… but that’s a different story…
  • I have a couple of proxy servers running Ubuntu also
  • Other general servers running Ubuntu… don’t ask, cause I can’t remember what they do half the time…

So, Duplicity is a backup application. From their website:

What is it?

Duplicity backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.

The duplicity package also includes the rdiffdir utility. Rdiffdir is an extension of librsync’s rdiff to directories—it can be used to produce signatures and deltas of directories as well as regular files. These signatures and deltas are in GNU tar format.

So, how do we get it working? Well, given that I am on Ubuntu, these are the steps I needed to do:

  • First, we need some credentials and API keys… If you haven’t signed up for hubiC, do so now… That URL gets you an extra 5GB if you sign up for free (usually 25GB), or if you pay 1EUR a month you get 110GB (usually 100GB), and 5EUR a month gets you a staggering 10TB (yup! Terabytes!).
  • Log in to hubiC and, in the menu, go to ‘My Account’, ‘Developers’. In here, create a new application (name and URL to redirect to… http://localhost seems to work correctly). Note the Client ID and Secret ID you are given.
  • Create a credentials file with your own details (the original gist isn’t reproduced here, but there is a sketch of the format at the end of this post)… I know, I am not a fan of sticking my password in a txt file… but it should only live on your local machine…
  • That file should be in your home directory and should be called .hubic_credentials.
  • Add the duplicity PPA (https://launchpad.net/~duplicity-team/+archive/ubuntu/ppa) to Ubuntu using the add-apt-repository command (details at the link above, under ‘read about installing’). For me, it was just ‘sudo add-apt-repository ppa:duplicity-team/ppa’.
  • Install duplicity with ‘sudo apt-get install duplicity’. Don’t forget (it’s in the tutorial above!) to do a ‘sudo apt-get update’ first!
  • When I ran that, there were a few extra Python packages to be installed, so I was asked whether I wanted to install them… Say yes.
  • Now, to run a backup we run the following command:

duplicity ~/ cf+hubic://location

  • cf+hubic is the backend to use, ~/ is the path to back up (my home directory in this case) and location is where on hubiC we want it stored. If this doesn’t exist, not a problem… it will be created.
  • After we run this we… ahhh… I get an error:

BackendException: This backend requires the pyrax library available from Rackspace.

  • Right… the pyrax library is from Rackspace and is available through pip…
  • I seem to have Python and a few other bits installed on this machine, so running ‘sudo pip install pyrax’ works… Your mileage may vary… [e.g. this is out of scope for this tutorial! You’re on your own!]
  • Other problem… I got a load of weird and wonderful errors like this:

AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'SplitResult'

  • I fixed these by running:

sudo pip install furl --upgrade

  • FINALLY! IT’S ALIVE!!! By default, it asks you for a passphrase for the GnuPG encryption… and it’s all good! The first backup creates the directories, required files, etc. The next time you run the command, it will only upload changes. It will also ask for the GnuPG passphrase you entered, so remember it!
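
For reference, the ~/.hubic_credentials file mentioned above looks roughly like this (going from the duplicity man page — the email, password, IDs and redirect URL are all placeholders for your own values from the Developers page):

# every value below is a placeholder - swap in your own hubiC details
[hubic]
email = you@example.com
password = your_hubic_password
client_id = your_api_client_id
client_secret = your_api_client_secret
redirect_uri = http://localhost/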

And that’s all folks! Any questions, leave them in the comments!

Hubic, OpenStack Swift and Curl

hubiC is an online storage site built by the guys at OVH. They are currently offering 30GB free (if you use the link above), or if you pay, you get 110GB (instead of the usual 100GB) for EUR1 a month, or 10.5TB (yup… TERABYTES!) for EUR5 a month… That’s a crazy amount of storage for a not-crazy amount of money!

So, while playing around with different things, I found they have an API, so other than the usual apps to play with (like the Hubic Apps for iPhone, Android, Windows Phone, Windows Desktop and OSX, Duplicity for backing up *nix boxes, and a few others) you can build your own…

But first, I needed to figure out how… So, after a lot of arsing around in Linux shells with curl, I finally got some stuff working!

First, I used the hubiC sandbox to get the keys… it’s quite simple to walk through… this gets you your Access Token (see step 3). Next, we need to get the endpoint from hubiC. This gist shows more:
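
The gist embed isn’t shown here, but the requests look roughly like this — the "default" container name and the file paths are my assumptions, and $ACCESS_TOKEN, $TOKEN and $ENDPOINT stand in for the values you get back:

# 1: swap the OAuth access token for Swift credentials (returns a token, an endpoint URL and an expiry)
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://api.hubic.com/1.0/account/credentials

# 2: list the files in the container
curl -H "X-Auth-Token: $TOKEN" "$ENDPOINT/default"

# 3: upload a local file to a path inside the container
curl -X PUT -T ./photo.jpg -H "X-Auth-Token: $TOKEN" "$ENDPOINT/default/backups/photo.jpg"

# 4: download a file; -o tells curl where to write it locally
curl -H "X-Auth-Token: $TOKEN" "$ENDPOINT/default/backups/photo.jpg" -o photo.jpg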

Quick walkthrough:

The first curl request is to the hubiC API to get the credentials… this gives you a JSON response with a token and an endpoint URL, as well as an expiry time…

The next request gets you a list of all files (or at least a load of files, in my case) in your folder. The default name here is my folder… I think it’s what everyone starts out with in hubiC… if you remove it, you will see all your top level folders.

The next request I tried was to upload a file… the filename part is where you want it to be stored, and the file itself must exist on your local machine.

Finally, downloading a file… pass in the location of the file on the server (listing files will give you the location), and -o in curl sets the output location…

Simples! Now to get this working in C#… The full OpenStack Swift API docs are available and show how to do more… hopefully they will help with my C# coding…

Mobile Phone as a Service

After my post about the Raspberry Pi acting as a VoIP server, and being able to add a 3G dongle to let it act as a mobile phone gateway, it got me thinking… Why not have something that allows you to rent a mobile phone number in a country, and send and receive text messages, phone calls, etc., from anywhere in the world? That’s where Mobile Phone as a Service comes in…

The theory behind MPaaS is quite simple: a SIM card for a mobile phone is placed in a USB dongle, plugged into a VoIP server (an Asterisk box, probably a Raspberry Pi) and shared with the user who requests it.
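
If I ever build it, the Asterisk side would probably use chan_dongle; a very rough sketch of what that might look like, where the device paths, section names and dialplan are all guesses at this stage:

; dongle.conf -- one section per 3G dongle/SIM (the ttyUSB paths are guesses, check your own device)
[dongle0]
audio=/dev/ttyUSB1
data=/dev/ttyUSB2
context=from-dongle0

; extensions.conf -- let a VoIP user dial out through that SIM
[outbound-dongle0]
exten => _X.,1,Dial(Dongle/dongle0/${EXTEN})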

It’s only a theory at the moment… Any interest?