Christopher Lorier's blog

02 Jul 2014

Mostly writing. I got Brad to run a bunch of tests on the OpenStack cloud for me, but I haven't done anything with the results yet. Chapters are coming together pretty well. Who knows, I might even finish by the deadline.

26 Jun 2014

I've mostly been writing lately, but I have also been fixing some bugs in my network tests, and in the process I have started getting some results back.

So it turns out that in a network of 1000 nodes, connecting each pair of nodes with two paths creates a lot of paths. And when you are writing in Python, so you aren't worrying about destroying your data structures, and then try to do 1000 iterations with 3 different algorithms, you use a heck of a lot of memory.

So my week has been spent adding proper memory management: turning lists into generators, trying to reuse objects, and splitting my tests up so they can be run as separate processes, all to reduce that footprint.
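For a rough flavour of the kind of change involved (illustrative only, the names here are made up, not from my actual test code): yield paths lazily instead of materialising enormous lists, and run each algorithm in its own process so its memory is handed back to the OS when it exits.

```python
import multiprocessing as mp

def paths_between(graph, src, dst, _path=()):
    """Yield simple paths from src to dst one at a time (a generator),
    instead of building every path into one enormous list."""
    _path = _path + (src,)
    if src == dst:
        yield _path
        return
    for nxt in graph.get(src, ()):
        if nxt not in _path:  # skip cycles, the source of "infinite paths"
            yield from paths_between(graph, nxt, dst, _path)

def run_one(name, graph):
    # stand-in for one full test iteration of one algorithm
    count = sum(1 for _ in paths_between(graph, "a", "d"))
    print(name, "considered", count, "paths")

if __name__ == "__main__":
    g = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
    for name in ("pair", "two-flow", "failover"):
        p = mp.Process(target=run_one, args=(name, g))
        p.start()
        p.join()  # running one at a time keeps the peak footprint down
```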

In the process I discovered a few bugs, like occasional infinite paths, as well as some others that seem to be common only on huge networks, which is a nuisance to debug since it takes several minutes to run the algorithms on a network that size.

Anyway, while doing this I have started to get some results back, and they are kinda cool. The three approaches I am testing are:

1. Building two minimally overlapping paths between each pair of points.
2. Building two minimally overlapping paths in such a way that you are guaranteed never to have more than two flows to each destination.
3. Building one path, but at each hop having an alternative path to use if the next link is broken.

And it turns out that by far the most efficient method is 3, even though it can require four flows per switch per destination. The thing is, you only need those flows on transit hops, and because you don't build the full secondary path you end up with far fewer transit hops. Even cooler is that method 1 is actually more efficient than method 2. Each node can have a whole bunch of paths to a destination (and in the worst case it does end up costing more), but because method 1 builds shorter paths, on average it needs fewer total flows.
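To make method 3 concrete, here is a toy model of the idea (my own illustration, not the real implementation): each switch keeps a primary and a backup next hop per destination, much like an OpenFlow fast-failover group with two buckets, so a broken link is routed around locally without rebuilding the whole path.

```python
# (switch, destination) -> (primary_port, backup_port)
failover_table = {
    ("s1", "h2"): (1, 2),
    ("s2", "h2"): (3, 1),
}

# live state of each (switch, port) link
link_up = {
    ("s1", 1): True, ("s1", 2): True,
    ("s2", 3): False, ("s2", 1): True,  # s2's primary link is down
}

def next_port(switch, dst):
    """Pick the output port the way a fast-failover group would:
    use the primary bucket unless its port is down."""
    primary, backup = failover_table[(switch, dst)]
    return primary if link_up[(switch, primary)] else backup

print(next_port("s1", "h2"))  # 1: primary link is fine
print(next_port("s2", "h2"))  # 1: falls back without involving the controller
```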

05 Jun 2014

Making the path builders work proved far harder than I thought possible for what is just a fairly simple algorithm.

I have three systems now. The first just finds any two minimally overlapping paths; this was fairly simple to put together. The second proved the most difficult: it builds two minimally overlapping paths between each pair of nodes using only two flows per switch. The third doesn't create full paths; instead, each node has two paths it can use to send packets to a given destination, each leading either to another node that is closer to the destination or to the destination itself.
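For a flavour of the first system, here is one simple way to approximate two minimally overlapping paths (a sketch of the general technique using networkx, not necessarily what my code does): find a shortest path, heavily penalise its edges, then find a second shortest path, so shared edges are only chosen when there is no reasonable alternative.

```python
import networkx as nx

PENALTY = 1000  # added to edges already used by the first path

def two_min_overlap_paths(graph, src, dst):
    g = graph.copy()
    first = nx.shortest_path(g, src, dst, weight="weight")
    for u, v in zip(first, first[1:]):
        g[u][v]["weight"] += PENALTY  # discourage reusing these edges
    second = nx.shortest_path(g, src, dst, weight="weight")
    return first, second

ring = nx.cycle_graph(6)  # a ring has two fully disjoint paths
nx.set_edge_attributes(ring, 1, "weight")
print(two_min_overlap_paths(ring, 0, 3))  # the two disjoint halves of the ring
```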

Yeah, implementing the second of these turned into a bit of a nightmare; what I thought would be a one-afternoon job took me all week. So many little edge cases with no elegant solutions I could find, each one turning the code more and more labyrinthine. It wasn't a very fun week.

Anyway, it's all going now, so I am going to compare the algorithms on the number of flows they install, the length of the paths they find, and how effectively they distribute packets around the network.

Then it's mostly writing from here on. I would like to have a full network running this that I can introduce some loss into, to test how quickly I can locate the problem and work around it, and how many packets get lost in the process, but that seems like a distant goal at the moment considering the time I have left.

21 May 2014

Test results have been coming in, and things are going pretty much exactly as expected.

Packet colouring can find loss within 2 seconds, as you would expect. BFD takes a long time to notice low rates of packet loss, but will eventually detect rates at least as low as 2%. Basically it has to send on the order of 1/n^3 packets to detect a packet loss rate of n. All of this has brought me to the conclusion that packet colouring combined with the built-in link-down detection in OpenFlow is by far the most efficient way of finding loss: link loss is detected immediately, and everything else within 2 seconds.
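For scale, a quick back-of-envelope check (this assumes BFD only declares the session down after three consecutive lost packets, the usual detect multiplier, which is where the n^3 comes from: three consecutive losses happen with probability n^3):

```python
def expected_bfd_packets(loss_rate, detect_multiplier=3):
    """Expected packets sent before seeing detect_multiplier consecutive
    losses; for small rates this is roughly 1/loss_rate**multiplier."""
    return 1 / loss_rate ** detect_multiplier

for rate in (0.02, 0.05, 0.10):
    print(f"{rate:.0%} loss: ~{expected_bfd_packets(rate):,.0f} packets")
# 2% loss: ~125,000 packets -- at 100 packets/second, over 20 minutes
```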

The counters on the Pronto are still pretty dodgy. Using packet counters with multiple bridges on the one device seems to be a bad idea: my packets are being handled by one bridge but counted towards flows on the other.

I've started building path calculators. I have one which tries to exploit OpenFlow fast failovers, so when a link goes down, it only modifies the path at the point where the link down is detected. This could potentially require the smallest number of flows, but unless your network fits certain topological constraints it probably won't. I'm currently implementing the two-flows-per-switch version, which may involve very long paths in certain circumstances (basically, longer paths end up being prioritised over shorter paths a lot of the time), and I will implement the patented version too (unless someone tells me not to; I really have no understanding of patent law), just for comparison.

07 May 2014

I took the first step towards actually writing this week: figured out my chapter layout and made a bunch of notes about what to include.

I also completed my first test. It took pretty much all week to run, so I may have to do slightly fewer tests somehow: either fewer repetitions or fewer different values. Probably both.

I figured out a solution to my multiple-paths algorithm, basically by prioritising the paths from the current node and having everything else fall into line with that. It makes the algorithm fairly slow, and it means you end up prioritising longer paths over shorter ones in a lot of situations, but there is a limit to how bad the paths can be, and it works. It may still be non-polynomial, but at least it can be precalculated. Networks can't have all that many nodes, right?

29 Apr 2014

I have tests up and running.

There was a bit of an issue with the fact that the test relies on using os.system to call tc: it activates loss after a random time period and then times how long it takes to notice the packet loss. There seemed to be a problem when the random time period was too short; the whole program would lock up. I solved that by adding a minimum time period to the sleep. It feels like a bad way of fixing the problem, but it worked and I have no idea how else I would get around it.
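The shape of the harness looks roughly like this (a reconstruction for illustration; the interface name and delay bounds are made up):

```python
import os
import random
import time

IFACE = "eth1"    # hypothetical test interface
MIN_DELAY = 1.0   # the floor that stopped the lock-ups
MAX_DELAY = 10.0

def run_trial(loss_percent):
    # wait a random period, but never less than the minimum
    time.sleep(max(MIN_DELAY, random.uniform(0, MAX_DELAY)))
    # switch the loss on via tc's netem qdisc
    os.system(f"tc qdisc add dev {IFACE} root netem loss {loss_percent}%")
    started = time.monotonic()
    # ... block here until the monitor reports the loss ...
    detected = time.monotonic()
    return detected - started
```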

Anyway, the test is running, and I'm guessing it will be running all week.

There is a new OVS version and a new PicOS version, so I am checking all of our past issues against them again, just in case someone fixed them. Fingers crossed.

Also played around with our SDN with Brad. We fixed the issue with it constantly disconnecting from the controller: it turned out our drop rule was taking precedence over the hidden flows Open vSwitch uses to allow in-band control, so all our control traffic was getting dropped. We had an awful workaround, but then the new PicOS version fixed it.

16 Apr 2014

Just been writing code to set up the tests and investigating how to programmatically generate packet loss in a way that gives me the best timing accuracy.

I haven't been able to find a better option than just calling tc from within Python with os.system. I am unsure how much that is going to affect the level of accuracy, or how I can determine how much it affects the accuracy. So that is a bit of a concern, but I am soldiering on in the meantime.
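One way to at least bound the effect (a suggestion, not something from the actual tests): time a do-nothing shell command through the same os.system path and look at the spread.

```python
import os
import time

# time a no-op command through the same os.system() path the tests use
samples = []
for _ in range(100):
    start = time.monotonic()
    os.system("true")  # does nothing, so the elapsed time is pure overhead
    samples.append(time.monotonic() - start)

samples.sort()
print(f"median {samples[50] * 1000:.1f} ms, worst {samples[-1] * 1000:.1f} ms")
```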

I also helped Brad with our new SDN some more. It won't stay up for more than a couple of minutes, because there seem to be issues with the RouteFlow rules interfering with the hidden rules (which have higher priorities than the RouteFlow rules, and therefore should not be being interfered with). The upshot is that when the neighbour resolution on the switches times out, they can't re-resolve the address of the controller, so they lose the connection. It's a bit weird.

04 Apr 2014

My brain pretty much exploded from all the graph theory this week, so I put that aside to work on the actual testing approach.

The issue I have had with the redundant paths thing is that I still haven't been able to prove that my algorithm will always halt. I've tried a couple of approaches to this, but, as easy as it is to look at a picture of a network and see pretty much instantly exactly what needs to be done, turning that into an algorithm is really hard.

But the testing has been much more productive. The first thing to do is determine exactly how quickly I can run the different approaches, i.e. how quickly I can poll the flow counters and how quickly I can send hello packets. This also means testing different switch implementations to see whether that makes a difference. Then I can test how quickly the different approaches notice loss of differing natures, how much latency is required to cause false positives, and how easily I can differentiate latency from loss. Then I can do some tests on how well this scales.
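The core of the counter-polling check is simple enough to sketch (the names here are mine, for illustration): compare how many packets a flow sent upstream against how many arrived downstream between two polls.

```python
def loss_fraction(tx_before, tx_after, rx_before, rx_after):
    """Fraction of packets lost between two flow-counter polls,
    given the upstream (tx) and downstream (rx) packet counters."""
    sent = tx_after - tx_before
    received = rx_after - rx_before
    return 0.0 if sent == 0 else (sent - received) / sent

# upstream counted 1000 new packets, downstream only saw 978 of them
print(loss_fraction(5000, 6000, 4990, 5968))  # 0.022, i.e. 2.2% loss
```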

So I have started playing around with the OVS BFD implementation to have a baseline to work from. I won't be able to beat the OVS BFD implementation in terms of speed, but it isn't configurable automatically.

Joe tells me the OVS BFD implementation is being librarified, but that probably isn't done yet. Which is a pity, because I really don't want to implement this from scratch.

25 Mar 2014

Another week split between figuring out how to do the path generation and helping Brad with the REANNZ SDN.

We built a nice VLAN-capable switch, then built several more versions of it and got annoyed as they all failed to work properly on the Pronto. But we got there in the end.

I have also been implementing the redundant path pair code, but it is proving more and more complicated all the time. I have what I am convinced is a correct algorithm, but actually turning it into functioning code is really awkward.

18 Mar 2014

My week has been split between two main projects. The first was continuing to work on my graph theory problems. In the end I decided that, since the requirement of minimising flows is a specifically OpenFlow problem, it could be worth spending a bit of effort on, so I believe I have sorted out all the kinks in that and have just started implementing it again.

The other part was helping Brad set up the REANNZ SDN. We have it all connected now, and it passes packets, but we hit a few snafus. The main one is that when you try to use the "normal" forwarding action, the Prontos seem to ignore the VLAN for the first packet. So with two Prontos along the path we were losing 4 pings (and 4 ARP messages in each direction) every time we started over. The only solution was either to not use VLANs or to not use the normal action.

Brad also plugged things into the wrong ports and lots of nonsense like that. He might tell you it wasn't his fault, but he never writes weekly reports...