Tiernan's Comms Closet

Geek, Programmer, Photographer, network engineer…

Compressing and Uncompressing Protobuf items in C#

Part of a project I am working on required sending large amounts of data between different instances. To get this to work efficiently, we started using Protocol Buffers, via protobuf-net, in .NET. But the files were still quite large (17MB, give or take). So, we looked into compression…

Here are some examples of how we managed to compress the protobuf files. We got some decent compression: 3MB files, down from 17MB. Very happy.

To compress an object (obj) and write it to a temp file (tmpfile):
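The code snippet from the original post is missing here, but a minimal sketch of the compression side looks like this, assuming protobuf-net plus the built-in GZipStream; the Payload type and the obj/tmpfile names are stand-ins for your real ones:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using ProtoBuf; // protobuf-net NuGet package

[ProtoContract]
class Payload   // placeholder type; use your real [ProtoContract] class
{
    [ProtoMember(1)] public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        var obj = new Payload { Name = "example" };
        string tmpfile = Path.GetTempFileName();

        // Serialize obj straight through a compressing GZip stream into tmpfile;
        // protobuf-net writes to any Stream, so no intermediate buffer is needed.
        using (var file = File.Create(tmpfile))
        using (var gzip = new GZipStream(file, CompressionMode.Compress))
        {
            Serializer.Serialize(gzip, obj);
        }

        Console.WriteLine("wrote " + new FileInfo(tmpfile).Length + " bytes");
    }
}
```

Because the protobuf payload is serialized directly into the GZip stream, nothing uncompressed ever hits the disk.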

To decompress the object back to a known type:
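Again the original snippet is missing, but the decompression side is the mirror image: wrap the file in a decompressing GZipStream and hand it to protobuf-net. The Payload type and file name below are stand-ins (the setup block just creates something to read back so the sketch runs on its own):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using ProtoBuf; // protobuf-net NuGet package

[ProtoContract]
class Payload   // placeholder type; use your real [ProtoContract] class
{
    [ProtoMember(1)] public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        string tmpfile = Path.GetTempFileName();

        // (setup only) write a compressed payload so there is something to read back
        using (var outFile = File.Create(tmpfile))
        using (var outGzip = new GZipStream(outFile, CompressionMode.Compress))
            Serializer.Serialize(outGzip, new Payload { Name = "example" });

        // the decompression side: a decompressing GZip stream over the file,
        // deserialized back to the known type
        using (var inFile = File.OpenRead(tmpfile))
        using (var inGzip = new GZipStream(inFile, CompressionMode.Decompress))
        {
            Payload obj = Serializer.Deserialize<Payload>(inGzip);
            Console.WriteLine(obj.Name); // prints "example"
        }
    }
}
```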

Git tips and tricks

I use Git a lot for different things, including this blog. So, here are a few tips and tricks I have found useful along the way…

Symform – P2P Backup

I have previously posted about CrashPlan as my Backup System. I also, a long time ago, talked about Backing up SQL, MySQL and other stuff on my other blog. Well, CrashPlan is all good, but there are 2 “niggly” bits with it…

  • It's not FREE (well, this year I got it free on Black Friday…) but it is cheap ($120 a year to back up 10 machines to the cloud ain't bad).
  • It's NOT FAST! The CrashPlan datacenters all live in the US, and my servers live in Europe (either Dublin or Germany). So, bandwidth is limited… I get less than 1Mbit/s most times, but have seen it reach 3… I have 20Mbit/s upload… even half that would be nice…

So, that's where Symform comes in. Symform is a P2P backup service which runs on Windows, Linux and Mac OS X. In theory, it should run anywhere that has a Mono runtime, since it's written in .NET. Anyway, you start with 10GB of free storage, and you can increase that in one of 2 ways:

  • Pay money: for $0.15 per month, you get 1GB of storage in the cloud.
  • Pay bytes: for every 2GB “contributed” (which is actually more like a pledge than a contribution… more on that later) you get 1GB of storage in the cloud.

It works very well, and is nice and fast too. I have a few machines in the house contributing storage, a total of about 2TB, and I have been given 1TB of storage in “The Cloud”. There is a lot more on how this works in the “How Symform Works” section of their site.

I mentioned “Contribution” vs “Pledge” above… I have a machine in the house where I have pledged 1TB of storage. In reality, Symform can use the full 1TB if it needs to, but it is currently only using 168GB. Now, that could just be that the machine is still getting files, and it will end up using the full 1TB eventually, but either way, it's all good.

Also, as a couple of notes on Contribution and Backups:

  • The machine needs to be online and accessible on the internet at least 80% of the time, but 24/7 is ideal. If you drop below 80%, your account can be suspended.
  • Your machine needs to be publicly accessible, meaning port forwarded. I have a couple of contribution machines in the house, so they each have separate ports forwarded to them.
  • Given the P2P nature of the software, lots of connections to different machines are made… if you are behind a firewall, you may need to allow all or most outgoing connections. If you are on a really restrictive firewall, you may want to stick a contribution box in your DMZ, and probably use the Turbo Seeding feature.
  • Turbo Seeding is a handy feature, especially for laptops… the only problem is it's Windows-only… so importing and exporting does not work on Linux or OS X.
  • The software can manage work and non-work hours, and will limit the upload and download speeds during work hours. Also a nice feature…

So far, so good. Very happy with the software, but I would like a nicer interface to see what's going on. At the moment, you are limited to either using the web interface, which ain't bad but not great, or watching the log files… I would also like the ability to prioritize certain files or folders: for example, upload my documents folder before anything else, and if anything changes in there, even if something is uploading from somewhere else, pause and upload the documents folder first… Just a thought…

Moving your TMG SQL Server logs DB, and other TMG tips

In the house, I have been using Microsoft TMG 2010 Server for a while now. I use it as a firewall for some of the machines on the network, and also as a proxy for most, if not all, machines. When acting as a firewall, all traffic flows through the machine, be it HTTP/HTTPS, SMTP/POP3/IMAP, or anything for that matter. You can also lock down ports on the box, which is a feature of most firewalls, but I like TMG for its relative ease of use…

Anyway, one problem with routing all traffic from different machines through TMG is that, after a while, the logging starts getting big. Since TMG by default is set to use SQL Server, it can start using lots of memory, hard drive space, etc. So, here are a couple of articles which should make moving your TMG SQL DB to a different machine easier…

Some other tips you may find useful

  • If you have Malware Inspection turned on, but you know there are certain sites that won't serve malware (for example, the Ubuntu archives or YouTube.com), you can add these to the “Destination Exceptions” list. Under “Web Access Policy”, click “Malware Inspection”, then click “Destination Exceptions”. Double-click “Sites Exempt from Malware Inspection” and add your URL. I put *.ubuntu.com and *.youtube.com in here (Microsoft Update is already on the list). Now, when downloading files from these locations, they do not run through inspection, saving CPU cycles. WARNING: you need to trust these sites!
  • There is a nice little app to add to TMG called Bandwidth Splitter, which allows you to not only monitor what traffic is going through your network, but also put limits on different machine sets, users, etc. There is a free edition which works with only 10 clients, but it does what I need it to do to start with.

RouterOS Dynamic IP Updates

I have been using a MikroTik RouterBoard RB750 for a while now, and I love it! Over the weekend, I upgraded to an RB1100. It's the same software running the device, but the device is faster (an 800MHz PowerPC chip vs MIPS-BE at 680MHz), has more memory (512MB, upgradable to 1.5GB, vs 32MB) and more storage (I think it's 512MB on board, plus a 4GB MicroSD card, vs 32MB…). It also has more ports (13 GigE vs 5) and 2 switch groups, which I have no idea what they do just yet…

Anyway, part of getting the RB1100 online and taking over from my existing router was getting Dynamic DNS updating working. I use both DynDNS and No-IP for DNS, but I also like the look of Amazon Route 53. For updating No-IP, I am using the alternative script from the Mikrotik Wiki. To get this to work with DynDNS, it would just be a matter of changing the URL you point to… I am going to write a web script which will sit internally on the network, and use that instead of the No-IP URL. When that script gets called, I can log the info and update DynDNS, No-IP and Route 53 all in one go. I will be posting more about that soon…
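The core of that kind of fetch-based update is only a few lines of RouterOS script. This is a stripped-down sketch rather than the wiki script itself: the interface name, hostname and credentials are placeholders, and the real script adds checks so it only fires when the IP actually changes:

```
# RouterOS sketch -- placeholders throughout; see the Mikrotik Wiki script for the full version
:local wanif "pppoe-out1"
:local wanip [/ip address get [find interface=$wanif] address]
# the address comes back as x.x.x.x/nn, so strip the netmask suffix
:set wanip [:pick $wanip 0 [:find $wanip "/"]]
/tool fetch mode=http user="myuser" password="mypass" \
    url="http://dynupdate.no-ip.com/nic/update?hostname=myhost.example.com&myip=$wanip" \
    keep-result=no
```

Swapping the URL for the DynDNS endpoint (or for an internal web script that fans out to several providers) is the only change needed.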

And while we are on the topic of scripting RouterOS, check out the Mikrotik Script Example Page. Lots of good stuff up there!

Custom MSDeploy Overwrite Rules

I have a project for which we are trying to automate deployment. The plan is to automatically deploy the project to a staging server any time the build from SVN succeeds.

I have had a few problems with this, but here are some of the links which may come in handy for you.

There are still some tweaks needed to get this to work… If I find any more links, I will put them here… The problem we are having is that when a deploy happens, the Web Deployment tool cannot overwrite the log files directory, since the files are in use… One option would be to restart IIS, which would be OK in staging, but we want to keep the logs in test and production, so we need to figure out how to tell Web Deploy not to overwrite those files.

WANProxy and Squid with Upstream Servers

In my previous post on WANProxy, I did not really go into detail about what it actually is. The direct quote from their site is: “WANProxy is a free, portable TCP proxy which makes TCP connections send less data, which improves TCP performance and throughput over lossy links, slow links and long links. This is just what you need to improve performance over satellite, wireless and WAN links.” This is something that has interested me for a while, so I have been looking into it, and so far so good… In my last post I mentioned I was proxying Squid traffic; in today's post, I still am, but with some tweaks.

  • I have a Squid box running in the house. It is connected to 2 cable modems, giving me a 250Mbit/s down and 20Mbit/s up connection. It is also caching data locally.
  • On my laptop, I have Squid running also. It connects to the home Squid server through a WANProxy server; the local Squid uses the home Squid box as its upstream connection.
  • For upstream connections, I used the following lines in squid.conf:
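The exact lines from the original post are missing, but a minimal squid.conf upstream setup matching the description (WANProxy listening locally on port 3300, never going direct) would be along these lines:

```
# send all requests to the parent proxy -- WANProxy listening on
# localhost:3300, which forwards on to the Squid box at home
cache_peer 127.0.0.1 parent 3300 0 no-query default
# never fetch directly from origin servers
never_direct allow all
```

The cache_peer arguments are host, type, HTTP port, ICP port, then options; `no-query` disables ICP probes to the peer and `default` makes it the peer of last resort.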

I have set it to never use a direct connection, which is probably silly, since if I lose the WANProxy connection, I lose connectivity… Also, port 3300 is the port WANProxy is listening on.

So far, so good… I am also thinking this could be an interesting thing to get working on a Raspberry Pi… Just a thought… 🙂

Building WANProxy on Ubuntu 12.04

[Updated 2016/04/09] I had to use this today to build on Debian 8.3 for 2 different boxes. So, I am making minor changes (the URL of the Git repo, and where you build from) to make sure this works now.

I have been looking into WANProxy for a while now, but never successfully got it to build… I have been more successful recently, so here is what you need to do.

** NOTE **: I built this on Ubuntu 12.04, so these are the tips for that… Not sure about other Distros…
** Second NOTE ** : I am using the Digows GitHub Repo for downloads… There is also the WANProxy SVN Server and their official downloads page.

sudo apt-get install build-essential git-core libssl-dev uuid-dev

git clone https://github.com/wanproxy/wanproxy.git

cd wanproxy/programs/wanproxy

  • Note: git-core is needed to check out the code, build-essential gives you the essential build utils, and the -dev packages are needed for the build.
  • Note: I have tried this on a couple of VMs, and they take about 5 minutes to build… If you are building on physical hardware, it may be faster. Also, as a general note, if you are building with multiple processors, try adding the -jX option, where X is your CPU count + 1. For example, make -j5 if you have 4 CPUs or cores, or make -j17 if you have 16 cores…
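The compile step itself isn't shown above; from the wanproxy/programs/wanproxy directory it's just make, and the CPU-count-plus-one job count from the note can be computed rather than hard-coded. A sketch, assuming GNU make and that nproc is available:

```shell
# run make with one job per CPU, plus one
JOBS=$(( $(nproc) + 1 ))
make -j"$JOBS"
```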

    cp wanproxy ~/local/bin

~~Now, this is, so far, as far as I have gotten… but given it's building, and it's further than I was a few weeks back, I thought I would post in case I can help someone else…~~

** UPDATE ** I have successfully managed to get WANProxy working. The way I have it set up is as follows:
** UPDATE 2016 ** As part of my series on getting double internet speed, I am back looking at WANProxy… more on that later…

  • Linux box in house, and VM on laptop.
  • Both have WANProxy installed.
  • Use the ** Proxying over SSH ** example from the WANProxy examples site, which shows you how to proxy a single web server over SSH.
  • In my case, I pointed the port at my proxy server in the house. I also changed the if0.host address from localhost (only accessible from that machine) to its internal IP address (which can be seen by anyone on that network).
  • Finally, I told my browser to use the WANProxy IP and port 3300 (see the if0 config section) as its proxy.
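For reference, the client side of that setup ends up looking roughly like the config below. This is reconstructed from memory of the WANProxy examples page, so treat the directive names as approximate and check their Proxying over SSH example for the real thing; the addresses and ports are placeholders:

```
create codec codec0
set codec0.codec XCodec
activate codec0

# listen on the LAN-visible address instead of localhost
create interface if0
set if0.host "192.168.1.10"
set if0.port "3300"
activate if0

# the far end: the WANProxy instance in front of the home proxy box
create peer peer0
set peer0.host "home-wanproxy.example.com"
set peer0.port "3301"
activate peer0

create proxy proxy0
set proxy0.interface if0
set proxy0.interface.codec codec0
set proxy0.peer peer0
activate proxy0
```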

Works grand so far. No idea yet if it's “faster”, but it's working, which is a start…