[NOTE] This is part 5 in a series of posts. The rest can be found here.
This post is going to be an update and a bit of theory. There is probably very little “new” stuff going on here; mostly updates, and what I am planning on doing later on.
This week, I have been out sick, so I have not done much work, but I have been surfing the web, watching videos, downloading stuff, etc., so I have an idea of how things are going. First, as mentioned in the previous post, I have MPTCP, Squid, SOCKS servers, OpenVPN and iptables doing their magic. There are 2 OpenVPN tunnels between the house and Digital Ocean. All TCP traffic (bar port 80) is sent over SOCKS to the box in the cloud using RedSocks. All UDP traffic is sent directly over OpenVPN. Since MPTCP is in the mix, all SOCKS traffic is actually split over the 2 connections. All port 80 traffic, and 443 (if the client is using the local Squid as their proxy), is sent round-robin between the 2 upstream IPs to Squid (the 2 OpenVPN endpoints).
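To give a rough sketch of how the RedSocks redirection side of this hangs together (the ports, interface name and address ranges here are illustrative assumptions, not my exact config):

```shell
# Create a nat chain for RedSocks (assumed listening on port 12345)
iptables -t nat -N REDSOCKS

# Don't redirect traffic destined for local/private ranges
iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN

# Port 80 goes to the local Squid instead (assumed on 3128);
# all other TCP gets handed to RedSocks for the SOCKS trip upstream
iptables -t nat -A REDSOCKS -p tcp --dport 80 -j REDIRECT --to-ports 3128
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345

# Send TCP arriving from the LAN (assumed eth0) through the chain
iptables -t nat -A PREROUTING -i eth0 -p tcp -j REDSOCKS
```

UDP never touches this chain, which is why it goes straight over OpenVPN instead.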
Things I have noticed:
Every now and again, RedSocks crashes… it just full-on dies. It’s just a matter of starting it again, but it’s a pain…
I have had to restart Squid a couple of times… not too often, though.
There was a power outage in the house a few days back… so, when everything came back online, it was a bit of a pain bringing all the connections back to life. I do have to figure out a better plan.
I still have to read more on this ECMP stuff. Hopefully it will do what I need.
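One way to take the pain out of RedSocks dying, and of bringing things back after a power cut, would be to let systemd supervise it (assuming a systemd-based distro; the paths and options below are a sketch, not my actual setup):

```
# /etc/systemd/system/redsocks.service (illustrative)
[Unit]
Description=RedSocks transparent proxy redirector
After=network-online.target

[Service]
ExecStart=/usr/sbin/redsocks -c /etc/redsocks.conf
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With `Restart=always`, the “full on dies” problem becomes a 5-second blip instead of a manual restart.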
Now for the theoretical stuff. I started thinking: could this work outside the house? Could you build this into something smaller, like a Raspberry Pi, stick 2 or more USB modems in, connect it back to a server in the cloud, set up P2P OpenVPN connections and then get more than a single modem’s download speed? The problems I can see are around MPTCP. I am not sure if it has been ported to ARM to run on a Raspberry Pi. Second, the max you could ever get out of it is 100Mbit/s, given the 10/100Mb network port on board… and you may need extra power for the USB dongles. Also, getting P2P connections may be complicated, given the non-static IPs on the modems, though, in theory, non-P2P OpenVPN could work… Again, it’s a theory. I had the thought, and that’s where the title came from… anyway, throwing it out there…
I am also noticing that I am starting to hit the limits of my upstream VM. If downloading or uploading at speed, the processor cores (2 in the case of the box I am currently running) are pegged at pretty much 100%… well, 80-ish, but that’s because the other 20% is being used by Dante. I can hit a full 72Mbit/s up, but the max download is currently about 400, maybe 450… I need a faster box now…
I mentioned port 80 not being sent over SOCKS. That’s because it’s redirected to Squid. Squid (in the house) then uses Squid (in the cloud) as a parent. There are 2 round-robin parents for Squid, one on each OpenVPN connection’s IP address.
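As a sketch, the parent setup on the in-house Squid looks something like this in squid.conf (the 10.x addresses are illustrative stand-ins for the two OpenVPN endpoint IPs, not my real ones):

```
# Two parent caches, one per OpenVPN endpoint (addresses are examples)
cache_peer 10.8.0.1 parent 3128 0 no-query round-robin
cache_peer 10.9.0.1 parent 3128 0 no-query round-robin

# Always forward through a parent rather than going direct
never_direct allow all
```

The `round-robin` option is what spreads requests across the two tunnels.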
All other traffic (UDP, ICMP, etc.) is sent over the OpenVPN connection… currently only one is picked, but I have a cunning plan…
The cunning plan? Well, if I am reading the internet correctly, and I would like to think I am, I think ECMP, or Equal Cost Multi-Path routing, could help… Again, it’s a fledgling idea currently, and I am still reading the documentation, but if it works… well… I’m not sure… let’s see…
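For reference, the idea with ECMP on Linux is a single default route with multiple next hops; something like this (the tunnel interface names and peer addresses are assumptions, not my config):

```shell
# Replace the default route with one balanced across both tunnels
# (tun0/tun1 and the peer addresses are illustrative)
ip route replace default \
    nexthop via 10.8.0.1 dev tun0 weight 1 \
    nexthop via 10.9.0.1 dev tun1 weight 1
```

Worth noting: how the kernel spreads traffic across the next hops (per-flow vs effectively per-connection via the route cache) varies with kernel version, which is part of what I still need to read up on.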
[NOTE] This is part 3 in a series of posts. The rest can be found here.
In Part 1 of this series I explained the why and what I wanted to do for this “project”. In Part 2 I did some basic testing of both MPTCP and MLVPN. I also mentioned trying MLPPP using vtund, but it had been a while since I did that testing, and it had not been on bare metal. So, this post is a follow-up, where I am using bare metal.
So, first, the setup:
The ProLiant box is running Debian 8.3 x64, and has both vtund and ppp installed
I walked through the guide from John Lewis and made some changes to the configs; the main ones are mentioned below
Once done, I installed both iperf and iftop on both boxes, and ran

iperf -s

on the Digital Ocean box and

iperf -c 192.168.10.1 -d

on the local box. And, well, the results were not as expected. Pretty poor, actually:
First, using Squid installed on the DO box, I tried using wget to download a file through it. If I did this on the DO box itself, I was getting 100MBytes/s… When I ran it over the MLPPP link, well, under 7 was achieved.
Then I thought it might have been Squid. So, since the file had already been downloaded to DO, I SFTPed into the box over the MLPPP link, and tried again… Again, a pretty poor result. I think I saw it hit about 7MB a sec at one stage.
Here is what was showing on the DO box when running the SFTP download. You can see 2 connections from the 2 WAN links at home hitting the box, and they are balanced. It’s just nowhere near the speed they are capable of.
I did not get a screenshot of this, but when I tried with iperf, thinking it might have been the overhead of SFTP or Squid, I was getting results matching what I was seeing with SFTP: downloads in the 55-60Mbit/s range, and 40-ish for upload. 40 is still faster than 1 link, mind you…
I mentioned that I had made some minor tweaks to the configs from what John had written. Well, mostly it was changes to how routing was done. In John’s case, he is bonding a DSL and an HSDPA connection, so he had setup to do for logging into his PPP modem and connecting. Also, when he set up the interfaces, he put the routing tables in there. I have mine set up in a single config file, like the following:
I have changed the names from adsl1 and adsl2 to WAN1 and WAN2, and the IPs are changed from internal IPs to my public IPs. I manually run this when setting up my connection.
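A sketch of that kind of per-WAN routing setup looks roughly like this (the interface names, gateways and table numbers are illustrative placeholders, not my actual public IPs):

```shell
# One routing table per WAN link (names/addresses are examples)
ip route add default via 192.0.2.1 dev eth1 table 101
ip route add default via 198.51.100.1 dev eth2 table 102

# Traffic sourced from each WAN address uses its own table,
# so replies always leave via the link they arrived on
ip rule add from 192.0.2.10 table 101
ip rule add from 198.51.100.10 table 102
```

Without rules like these, both vtund endpoints would try to leave over whichever link holds the default route, and the bonding falls apart.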
Nothing else in his config files has changed. I did not do any of the masquerading stuff, mainly because this was testing; I just wanted a tunnel to start with. Reading the vtund.conf file, you can see that encryption and compression are both turned off, and the same in the ppp configuration. I also don’t think the issue is CPU performance, since these are the screenshots of top running on both boxes:
In both cases, CPU usage is sub-6% for vtund, and SSH seems to be using less than 10%. So, now I’m baffled as to why this is not performing as expected… More testing required!
[update 4/4/2016] – fixing images so they are clickable…
Microsoft Build 2016 is on this week, and there were a lot of interesting developments yesterday, but the one that interested me the most is Bash on Ubuntu on Windows. Dustin from Ubuntu has more details, and Scott Hanselman has posted a technical video about this. This is very interesting, and I CAN’T WAIT TO GET MY HANDS ON IT! But I do have some questions, which I thought I would put down in blog format:
Based on the post by Dustin, it seems that low-level Linux calls are being handled and translated to Windows system calls. Which makes me think: could any Linux distro work? Could Arch Linux, Red Hat or CentOS work in the same way?
Will this Work on Windows Server 2016 when it launches?
Given that it’s calling down to such a low level, could GUI applications work too?
Shut up and take my money! I WANT IT NOW!
So, there are my questions… This is very cool, and I cannot wait to get my hands on this. I’m just wondering if this will be available to Windows Insiders sooner rather than later?
[NOTE] This is part 2 in a series of posts. The rest can be found here.
In my previous post I explained what I was trying to do… This post explains what I have been working on recently, and the performance results…
So, first, what have I tried… There are 3 different things I have tried, and here are some of their details. Some will need to be updated (in other parts of this series), and others I will try to get back to eventually.
Hardware and servers used
To test this, I am using my HP ProLiant ML110 G5 running either Ubuntu or Debian Linux, with 2 GigE connections directly to the cable modems, and 1 connection to the LAN (for SSH and testing). The LAN has no gateway set, and the 2 WAN connections have DHCP enabled; they get fully public IP addresses. Upstream, I am using either Digital Ocean or Scaleway VPS boxes.
Digital Ocean has the advantage of allowing different kernels, so I have been using them for testing MPTCP. As for Scaleway, well, their BareMetal C2S/M/L boxes have between 4 and 8 cores (4 for the S, 8 for the M and L) and between 8 and 32GB RAM (S=8, M=16, L=32GB). The L model also comes with a 256GB SSD (plus the boot disk, which seems to be a network disk of some sort), and they all come with lots of bandwidth (I use the L because it’s got about 800Mbit/s to the internet).
Ping-wise, Digital Ocean is about 20-30ms away from the house (I picked London to host the servers) and Scaleway is a little further away at about 50ms (they are based in France).
MPTCP (their site is a bit wonky as of writing, so bear with me…) is a Linux kernel patch that allows TCP connections to use multiple paths… Essentially, if you have WiFi and 4G in a phone, and MPTCP is enabled, it should allow you to use both connections for TCP traffic, as long as the server upstream supports it. It also allows for easy failover if, say, you lose your WiFi connection. There is an example video of it on YouTube which shows the failover parts, and this video shows how they managed to get 50Gbit/s out of six 10Gb Ethernet connections.
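For what it’s worth, on the multipath-tcp.org patched kernel you can sanity-check that MPTCP is actually active with sysctl (the knob names below are from that fork, and may differ between versions):

```shell
# Confirm the patched kernel is running and MPTCP is enabled
uname -r                              # should show an mptcp-flavoured kernel
sysctl net.mptcp.mptcp_enabled        # 1 means MPTCP is on
sysctl net.mptcp.mptcp_path_manager   # e.g. "fullmesh" to use all interface pairs
```

If `net.mptcp.*` doesn’t exist at all, you’re on a stock kernel and everything silently falls back to plain TCP.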
When I was using MPTCP, I had a copy of Squid on both boxes, and told Squid locally to use Squid on the upstream box (over an SSH tunnel, which was over the MPTCP link) as a parent cache. Using this method, I could see (using iftop) that both connections were being used. When trying proper performance testing, I set up a RAM disk on both machines and copied a Linux ISO to the Digital Ocean box. Then, using wget and Axel, I downloaded the files from Nginx on the server, and checked the results. I can max out 1 single connection, plus use about 60-80Mbit/s from the second: about 420-440Mbit/s total. Disk was not the bottleneck, since I was writing to RAM, so more tests are required.
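The test itself is simple enough to reproduce. Roughly (the mount size, hostname and ISO name here are placeholders, not my actual setup):

```shell
# On both boxes: a RAM disk so the disks can't be the bottleneck
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk

# On the local box: single-stream vs multi-stream download from the
# server's Nginx (example.com and the ISO name are placeholders)
cd /mnt/ramdisk
wget http://example.com/linux.iso        # one TCP stream
axel -n 4 http://example.com/linux.iso   # four parallel streams
```

Comparing the wget and axel numbers shows whether MPTCP is helping a single stream, or whether the gains only appear with parallel connections.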
MLVPN is a pretty interesting project that caught my eye. The idea is quite simple: you configure the local box and the server, as mentioned in their example guide, and run the MLVPN program on the server, then the client. It creates 2 VPN tunnels between the 2 boxes and bonds them… In my case, I was given an IP of 10.42.42.1 on my box in the house and 10.42.42.2 on the server. Any traffic over that tunnel is bonded… The problem is, it seems to be quite processor-intensive: my Digital Ocean box was showing one CPU core (out of 2) maxing out at around 80%, and my ProLiant in the house maxing out around the 70% mark… all while transferring data at around 100Mbit/s. I tried iperf and got the following:
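To give a flavour of the setup, here is a trimmed-down sketch of a client-side config in the spirit of their example guide (the addresses, ports and password are all placeholders, and the exact option names may differ between MLVPN versions):

```
# mlvpn.conf on the client (illustrative values only)
[general]
mode = "client"
interface_name = "mlvpn0"
password = "changeme"

# One section per WAN link, each bound to a different local address
[wan1]
bindhost = "192.0.2.10"
remotehost = "203.0.113.5"
remoteport = 5080

[wan2]
bindhost = "198.51.100.10"
remotehost = "203.0.113.5"
remoteport = 5081
```

One section per WAN link is the whole trick: MLVPN opens a UDP tunnel per section and stripes packets across them.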
Getting 50Mbit/s upload is good, in reality, since in theory my max speed would be 72 without overhead. But 116Mbit/s down is less than a third of the max speed of a single connection. So, I tried just uploads and downloads…
Upload Only (from local machine to server)
Download Only (from server to local machine)
As you can see, the download speed has increased a little, to 176Mbit/s, but the upload speed is now at over 60Mbit/s!
Still.. download is as important as upload, and given I haven’t managed to get it to max out one connection, never mind 2, even more testing is required…
MLPPP (using VTUN)
This is one I need to come back to… I used the guide from John Lewis, but was only managing to get about 100Mbit/s… I was originally using a VM (so disk may have been the issue) and also had the connection behind my EdgeRouter, so it might have been firewall rules causing a slowdown. But I do need to come back to this soon… Watch this space.
Well, at the moment, all I can conclude is that more testing is required. Upload-wise, I can somewhat use most of my bandwidth with MLVPN, and I did see promising results with MPTCP. I gave up a bit too early with MLPPP, so more testing is required there. Also, all tests so far are just iperf between boxes; I did use Squid with the MPTCP box for a while, but not for proper performance testing. And even once this is all sorted out, I will need to turn this into a proper “router” too… So, conclusion? This was originally meant to be a 2-parter… now it looks like it will require a lot more parts… Watch this space…
Back in August of 2012, I started this site using Git and Jekyll. I hosted most of it at home, pushing to a server in the house. Then, a few years back, I moved to pushing the files to Amazon S3 with CloudFront doing distribution. The last move had me hosting the files on NearlyFreeSpeech.NET with CloudFlare doing the content distribution… Well, that changed over the last few days… again…
Currently, you are still hitting CloudFlare when you hit this site, but the backend is back to being hosted on Amazon S3. How the files get to S3 is the more interesting part now. All the “code” for this site is up in a GitHub repo, and any time something is checked in, Travis CI kicks off, builds the files using Jekyll and pushes to S3 using s3_website. All my “private” keys are hidden in Travis CI, so no one can access them but me. This makes updating the site a lot easier. I can create a file in GitHub directly, preview it, make changes, etc., and then check in. Once checked in, Travis kicks off, builds and deploys. All good!
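The pipeline boils down to a fairly small .travis.yml; something along these lines (a sketch rather than my exact file, and it assumes the AWS keys are set as encrypted Travis environment variables):

```
# .travis.yml (illustrative)
language: ruby
install:
  - bundle install
script:
  - bundle exec jekyll build
after_success:
  - bundle exec s3_website push
```

Every push to the repo runs the build; only a successful build gets pushed to S3.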
It also means that if “bugs” are found on the site (by you, my dear reader), or if you have queries for some things, a “bug report” can be opened on the issues log. I already have a bug for making the site faster… Anything else you want me to change?
[NOTE] This is part 1 in a series of posts. The rest can be found here.
First, a bit of background, and then I will explain what I am currently running in Part 2…
For the last 15 or so years, I have had at least 2 internet connections into the house… 2 of them have always been cable modems from NTL, which became UPC, and is now Virgin Media. When I started, I think the modems were 150/50kbit/s and 600/150kbit/s, and they have steadily increased in speed, currently at 360/36Mbit/s each… But they have always been somewhat separate, and single-thread downloads have always been limited to 1 of the connections… I have been looking for ways around this for years…
It started with a Linksys RV042 router, which allowed me to load-balance my connections… At the time, and I can’t even remember when this was, my total bandwidth would not exceed the router’s. The RV042 has 2 10/100Mbit WAN links and 4 100Mb/s LAN links… So, when the connection bandwidth increased, I moved to a new router…
The next router vendor I tried was Mikrotik. I tried a few different options, including an RB1100 and running their RouterOS on x86 hardware… Both worked, well, OK, and the load balancing with nth did do what I needed, along with other stuff, like routing traffic destined for some sites (like BBC iPlayer) over a VPN. But in the end, between hardware issues and performance problems with the x86 machine (Mikrotik at the time was limited to 2GB of RAM on x86 hardware), I ended up at pfSense.
pfSense was installed on the same hardware: an HP ProLiant ML110 G5 with 8GB RAM, a Core 2 Quad processor and 12 GigE network cards… And, on pfSense, things were good… Performance was stable, load balancing worked as expected, I could set some traffic to go over certain links, etc. All was good… But I lacked IPv6… Plus, the HP used a LOT of power…
Plus, the EdgeRouter (which is what I moved to next) does not produce as much heat, and it’s a LOT smaller than the ProLiant! It does all the same things I could get pfSense to do, in a much smaller package (I could, in theory, get a smaller box for pfSense).
So, where does that leave us? Well, I now have 720Mbit/s down and 72Mbit/s up, if I can use multiple threads… But what if I don’t? What’s next? Well, in the second post, I will explain what I have been trying to do in recent weeks, and what I can do now…
I have 2 new side projects to announce on the site today. The first has been running for a while (the first check-in was December 28th) and it’s called B2Uploader. It’s a fairly simple Windows application to upload files to Backblaze B2. If you are not familiar with Backblaze, they provide unlimited backup storage for the low price of a fiver a month. They are the guys who design the Backblaze storage pods (I want one, by the way!) that allow them to provide unlimited storage for that fiver a month (I currently back up over 4TB to them!), and late last year they started offering B2, which is a storage platform on their pods with a (somewhat) easy-to-use API. AND IT’S CHEAP! Half a cent (0.5c) per gig stored per month! That’s crazy cheap!
B2Uploader uses the B2 API to upload files (it could do more, but currently, as the name suggests, it’s upload-only). It’s quite simple, and all the code is available. More stuff is coming over the next few weeks. Some of the usual badges for open-source applications are below. If you want to shout at me, shout in the Gitter chatroom and I will reply. You can see the latest builds over on Travis CI, and the latest releases are available on GitHub.
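For the curious, the B2 API that B2Uploader wraps starts with a single authorize call; roughly (the account ID and application key below are placeholders for your own credentials):

```shell
# Authorize against B2 (v1 API); the response contains an auth token
# and the API URL to use for subsequent calls like b2_upload_file
curl -u "ACCOUNT_ID:APPLICATION_KEY" \
    https://api.backblazeb2.com/b2api/v1/b2_authorize_account
```

Everything else (listing buckets, getting an upload URL, uploading) chains off the token and URL that this call returns.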
The second project is still in the planning phase, and it’s an update to an older project I was working on called HubicTesting. The name is very cleverly called, wait for it… HubicTesting 2.0! I have mentioned hubiC before here. It’s cheap (about a tenner a month) for lots of storage (10TB!), but has an odd API… It uses Swift for storage, but a weird(ish) API for authentication. Anyway, more details will be on the site once I write it up.
So, anyone needing to upload files to B2, check out B2Uploader. Want to work with stuff on Hubic, check out HubicTesting 2.0. Any questions, drop me a mail or find me on the Gitter channel. Have a good one!