Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Building WANProxy on Ubuntu 12.04

[Updated 2016/04/09] I had to use this today to build on Debian 8.3 for 2 different boxes, so I am making some minor changes (the Git repo URL and where you build from) to make sure this still works.

I have been looking into WANProxy for a while now, but never successfully got it to build… I have been more successful recently, so here is what you need to do.

**NOTE**: I built this on Ubuntu 12.04, so these are the tips for that… Not sure about other distros…
**Second NOTE**: I am using the Digows GitHub repo for downloads… There is also the WANProxy SVN server and their official downloads page.

sudo apt-get install build-essential git-core libssl-dev uuid-dev

git clone https://github.com/wanproxy/wanproxy.git

cd wanproxy/programs/wanproxy

make
  • Note: git-core is needed to check out the code, build-essential gives you the essential build utilities, and the -dev packages are needed for the build.
    • Note: I have tried it on a couple of VMs and they take about 5 minutes to build… If you are building on physical hardware, it may be faster. Also, as a general note, if you are building on a machine with multiple processors, try adding the -jX option, where X is your CPU count + 1: for example, make -j5 if you have 4 CPUs or cores, or make -j17 if you have 16 cores…

    cp wanproxy ~/local/bin

~~Now, this is, so far, as far as I have gotten… but given it's building, and it's further than I was a few weeks back, I thought I would post in case I can help someone else…~~

**UPDATE**: I have successfully managed to get WANProxy working. The way I have it set up is as follows:
**UPDATE 2016**: as part of my series on getting double internet speed, I am back looking at WANProxy… more on that later…

  • Linux box in the house, and a VM on my laptop.
  • Both have WANProxy installed.
  • Use the **Proxying over SSH** example from the WANProxy examples site, which shows you how to proxy a single web server over SSH.
  • In my case, I pointed the port at my proxy server in the house. I also changed the if0.host address from 127.0.0.1 (only accessible from that machine) to its internal IP address (reachable by anyone on that network); see the config fragment below.
  • Finally, I told my browser to use the WANProxy IP and port 3300 (see the if0 config section) as its proxy.
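For reference, this is roughly what the if0 part of the client-side config ends up looking like. The full file follows the Proxying over SSH example on the WANProxy site; the option names below are from memory of that example and the IP is just a placeholder, so double-check against the original config.

    create interface if0
    set if0.family IPv4
    set if0.host "192.168.1.50"   # LAN IP instead of 127.0.0.1 so other machines can reach it
    set if0.port "3300"           # the port the browser is pointed at
    activate if0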

Works grand so far. No idea yet if it's “faster”, but it's working, which is a start…

Raspberry Pi now with 512MB RAM, other random links…

Earlier on today, the Raspberry Pi Foundation announced that the Model B will now be shipping with 512MB of RAM as standard, with no price change. I posted the link up on Hacker News and it has made quite a lot of people happy!

So, with the news of extra RAM, it's started making me think of more things the Pi could be used for…

  • An Austrian hosting company is offering free co-location for Raspberry Pis, with a 100Mb/s uplink and 100GB of bandwidth! FREE! With 512MB of RAM (or even 256MB), that's enough for a small site, or even a medium site (like this one) running statically.
  • Since the Pi can connect to the internet using 3G, you could install a copy of Squid and use an SSH tunnel back to your main office or home, and have multiple levels of caching going on… It would also secure your browsing.
  • Sending SMS messages through the Raspberry Pi could be useful for sending diagnostic info if the device is remote…
  • [XBMC on the Raspberry Pi] would make your media center a lot smaller, use less power, etc.

Just a note on the idea of using 3G and Squid on the Pi… This is something I am interested in, so it's something I want to start playing with. The idea would be as follows:

  • Have a Linux box in the house with Squid and SSH enabled, and port forward SSH to the Linux box.
  • Tell the Pi, on boot, to try and connect to a 3G connection.
  • Once the connection is live, connect to the SSH tunnel.
  • Squid on the Pi should already be configured to use the in-house Squid box as an upstream cache (see the sketch after this list).
  • The local Squid should use some storage on either USB or SD for its local cache.
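Something like the following is the rough shape of it. This is only a sketch under a pile of assumptions: hostnames, ports and paths are placeholders, and I am assuming the home box runs Squid on its default port 3128.

    # On the Pi: tunnel the home Squid port over the SSH connection
    ssh -N -L 3129:localhost:3128 user@home.example.com &

    # In the Pi's /etc/squid/squid.conf: treat the tunnelled home Squid as a parent cache
    #   cache_peer 127.0.0.1 parent 3129 0 no-query default
    #   never_direct allow all
    #   cache_dir ufs /media/usb/squid-cache 2048 16 256   # local cache on USB storage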

Also, using something like WANProxy on both ends should make things faster too… Having the Pi, a 3G modem, a USB key (optional) and a battery pack, all in a small box with a WiFi adapter, should give you a faster mobile internet connection… And if you could get 2 or more 3G modems (using a powered USB hub of some sort), you could do load balancing…

More Raspberry Pi and camera antics

A while back I posted about the Raspberry Pi, and in the post was a link to a photographer who was embedding a Raspberry Pi into a Canon 5D MKII battery grip. Well, it's been a while, and I have been thinking about the Pi and cameras, so I went looking around… Here is what I found.

The one thing I have not been able to figure out is how to tell the Pi to take the photos off the camera without having a monitor plugged in. I was thinking of telling it, on boot, to start monitoring the camera and download everything. That way, if you have it plugged into an external power source, it will be monitoring and downloading to somewhere… a USB HDD, USB key, etc. If there is a WiFi spot around, it could try uploading them to a location, possibly manageable via a web interface of some sort… Lots of interesting ideas can be done… it's just a matter of doing them… 🙂
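One way this could work (just a sketch, not something I have running): gphoto2 can pull files off most cameras from the command line, so a loop started on boot could poll for the camera and copy everything it finds to a USB drive. The choice of gphoto2, the paths and the timing below are all my assumptions.

    #!/bin/sh
    # Rough sketch: poll for an attached camera and download everything it holds.
    DEST=/media/usb/photos
    mkdir -p "$DEST"
    cd "$DEST" || exit 1
    while true; do
        # --auto-detect lists attached cameras; if one shows up, grab its files
        if gphoto2 --auto-detect | grep -q usb; then
            gphoto2 --get-all-files --skip-existing
        fi
        sleep 60
    done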

RouterOS Using Host names in Firewall Rules

As a follow-up to yesterday's post on RouterOS Blocking Machine access to all but one IP, I thought I would show how to allow extra IPs in that rule without having a shedload of firewall filters.

  • First things first, get the list of IPs you want to allow access to. In my case, I just did an NSLOOKUP on the host name and got the IPs.
  • Create an “Address List” in RouterOS. This can be done in the web interface by going to IP / Firewall / Address List and clicking Add. I had none previously, so I created a new entry, naming it ExpressVPN (the lads I use for VPN access), and added the first address.
  • This is where things get interesting: for each extra IP (for ExpressVPN, I have 4), you create a new address-list entry with the SAME name but a different IP.
  • In your firewall rule, you should have either a src address or a dst address. In my case, I had both, but this change was for the dst address. I removed the address from the rule and added it as a dst address list entry instead. If you have multiple address lists, you will see them all here.

To do this at the command prompt:
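The commands below are a rough reconstruction of what that looks like in the RouterOS terminal; the addresses are placeholders and the list and chain names are just the ones mentioned above, so adjust to suit:

    /ip firewall address-list add list=ExpressVPN address=198.51.100.1
    /ip firewall address-list add list=ExpressVPN address=198.51.100.2
    /ip firewall filter add chain=forward src-address=192.168.0.123 dst-address-list=!ExpressVPN action=drop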

This will block any traffic, other than to the IPs in the ExpressVPN address list, from the machine 192.168.0.123.

RouterOS Blocking Machine access to all but one IP

So, I have a machine on my network which should only be connecting to the internet through a VPN. I needed to tell my RouterOS box to block all access except to that one IP address… The following should do the trick… YMMV.
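Something along these lines (a sketch only: the src-address is the machine being locked down, the dst-address is the VPN server, and both IPs here are placeholders):

    /ip firewall filter add chain=forward src-address=192.168.0.123 dst-address=!203.0.113.10 action=drop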

This will drop any packets from the src-address (the machine's IP) that are not for the dst-address (the allowed IP). In my case, the dst-address is the VPN server I want it to connect to. So, in theory, all packets should just go through the VPN and not leak out into the rest of the network… again, still testing this, so be careful!

ZFS, iSCSI, NFS, SFTP, Hyper-V and more

As part of my new task to make my files safer, my backups faster and, well, everything cheaper, I am looking into ZFS for my storage needs. My needs are as follows:

  • Allow me to store lots of different types of data (photos, videos, music, VMs) in different formats (RAW and JPG photos; MP4, AVI and DivX videos, with DVD and BluRay rips also a possibility; MP3 music; and VHD files from Hyper-V, including ISOs and snapshots). I also need to store different file systems using iSCSI (Mac and Windows clients will be mounting the storage).
  • Must be safe. DO NOT LOSE DATA!
  • Must be somewhat fast. I have VHDs weighing in at 100GB… my photo collection is 600GB. If I need to move or copy files to the storage system, it must be fast.

So, ZFS offers all of this. I can export storage over iSCSI, NFS, SMB, etc. That all works well. But the replication stuff is the interesting part…
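For the sharing side, the nice thing is that a lot of it is just dataset properties. A minimal sketch (dataset names are placeholders; which properties are available depends on the platform, e.g. shareiscsi was a Solaris-era thing, while sharenfs and sharesmb are common):

    zfs create tank/photos
    zfs set sharenfs=on tank/photos       # export over NFS
    zfs set sharesmb=on tank/photos       # export over SMB/CIFS
    zfs set compression=on tank/photos    # cheap win for bulk photo/VM storage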

The plan, which I am working on, is as follows:

  • Have 2 machines set up: one in the house and one in a datacenter (I have a dedicated box in the Hetzner datacenter). Both could be VMs (the one in the datacenter will more than likely be a VM).
  • Use the storage on the local system for whatever I need backed up.
  • Have a script which will take a snapshot of a given pool every 4 hours or so…
  • That script should also dump the snapshot to a temporary location on the machine using ZFS send.
  • That file should be checked, compressed, broken up into little bits and checked again… checking is important! (A rough sketch of these steps follows the list.)
  • Take those little bits and send them to the datacenter, which will do lots more checking and import the files into the ZFS pool over there…
  • There may even be a two-way system to send from the datacenter back to the house…
  • Finally, the remote pool should be dumped to the SFTP backup space that Hetzner give me… Currently set at 100GB, but it can be increased as needed…
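To make the snapshot / dump / split / check steps a bit more concrete, this is the sort of thing I have in mind. It is a sketch only: the pool names, paths and chunk size are placeholders, and the transfer and import end is hand-waved.

    # Snapshot the pool and dump the stream to a temporary, compressed file
    SNAP=tank/backup@$(date +%Y%m%d-%H%M)
    zfs snapshot "$SNAP"
    zfs send "$SNAP" | gzip > /tmp/backup.zfs.gz

    # Break it into 1GB bits and checksum everything
    split -b 1G /tmp/backup.zfs.gz /tmp/backup.part.
    sha256sum /tmp/backup.part.* > /tmp/backup.sha256

    # On the far side (after copying the parts up and re-checking with sha256sum -c):
    cat backup.part.* | gunzip | zfs receive tank/backup-copy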

That's the “plan”… Let's see how it actually works out…

Anyway, parts of the process I need to tweak:

  • Uploading, and using as much of my upload bandwidth as possible (2 x 10Mb/s upload connections…). If I am backing up 800GB, which should be my first backup, I would like to use both pipes to the fullest. On a single connection at 50% capacity, it would take 15.1 days to upload. If I can get both connections working at 80% capacity, giving me 16Mbit/s, it would be down to 4.7 days (there is a quick back-of-envelope check below). With compression and deduplication, I can probably bring that down a bit more…
  • Backing up to SFTP… Reading up on this is telling me it might not be such a good idea…
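For what it's worth, a quick back-of-envelope check (assuming 1GB = 10^9 bytes, so 800GB is 6,400,000 megabits) lands in the same ballpark as the figures above:

    echo "scale=1; 6400000 / 5  / 86400" | bc   # one line at ~5Mbit/s   -> ~14.8 days
    echo "scale=1; 6400000 / 16 / 86400" | bc   # both lines at 16Mbit/s -> ~4.6 days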

Some links which you might find useful:

More Jekyll Stuff

A couple of bits and pieces on Jekyll today… I am tweaking the outline of the site, so I have been surfing around finding stuff… Here is what I have found:

  • Host a static site on Amazon S3: interesting idea, and something I will look into eventually… And with the help of CloudFront, you could host your whole blog on a CDN!
  • Rake tasks for Jekyll: Rake is the Ruby version of make… and a Rakefile can have tasks, which are written in Ruby… They can do, from what I can gather, pretty much anything… Some examples of what you can do with them are linked here… I especially like the new post generator… very handy!
  • Jekyll Plugins: various different plugins for Jekyll… I am interested in a few of these, mainly the Generate_projects one, which generates a page for your projects based on your GitHub projects… very cool stuff…
  • Strictly speaking this is not just a Jekyll how-to, but Migrating from WordPress to Jekyll is a handy read. My main blog and my podcast and photography blog both run WordPress. Migrating them to Jekyll would mean I could move them directly to a CDN and make things a lot faster… Maybe something I plan on doing soon…

If you have any tips or tricks, why not leave a comment and I can add them to the post.

Handbrake Cluster

[UPDATED] Someone asked in the comments if there was a binary build of this. There is now! http://handbrakecluster.codeplex.com now hosts the code and binaries, and will soon have help files and documentation.

A few days back, I wrote a post titled Powershell + Handbrake + AppleTV + iTunes = Automatic TV… ish. In it, I included a block of PowerShell code to bulk convert TV shows from whatever format you had them in to an M4V format for the AppleTV. Well, as they say, “if necessity is the mother of all invention, laziness must be the father”. I have a lot of shows I wanted converted for the AppleTV, so I built something… It's called HandBrake Cluster; it is written in .NET 4.5 and uses MSMQ and HandBrake to do the processing… The workflow is as follows:

  • Set up the system as described on the HandBrake Cluster site.
  • Run the adder program with the required parameters (the location of the files you want converted, the type of files to find, where you want the output files to be placed, and the output file type).
  • Run the cluster EXE on as many machines as you want. Each machine will need to point to the correct MSMQ on the head node, have its own copy of HandBrake, and must have access to the file share that you are reading from and writing to…
  • Each node will take a message off the queue, process the file and then mark it as completed. There is code to check whether a message has failed, so, in theory, once something goes into the queue, it should always get processed… (Each node is essentially just wrapping a HandBrakeCLI call; a rough example is below.)
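For a sense of what each node ends up doing per file, it boils down to something like the following HandBrakeCLI call. The preset name, paths and file names here are just for illustration, not what the tool literally runs:

    HandBrakeCLI -i "D:\incoming\show-s01e01.avi" -o "D:\converted\show-s01e01.m4v" --preset "AppleTV 2"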

I have run this at home on a couple of different machines, and so far so good… My room gets a bit warmer when I kick this off, and between the 3 machines I ran it on, my FPS count went from just 80-120 on the Godbox to a total of about 160-240 FPS (Godbox = 80-120, Servers 1 and 2 are about 40-60 FPS each).

The next thing I managed to do was tweak my import process for iTunes. I am using a program called iHomeServer for iTunes, which is running on the Godbox. It monitors a folder, which is where HandBrake Cluster is writing to, and adds the files to iTunes. I can then tweak the metadata using the tool: add artwork, tell it which shows are related, set title info, and so on. It is very handy, and something I am very happy with.