Blogs

18 Mar 2014

A longer run to find black holes in load balancers using our version of fast mapping has been carried out. Instead of 1000 addresses, two sets of 5000 destinations were used. In each case there was an initial MDA traceroute run, followed by eight Paris traceroute runs and then a final MDA run. The resulting warts files were scanned using the upgraded analysis program, which, when it finds a short Paris trace, compares the initial and final MDA runs for route changes and checks whether the Paris end point lies within a load balancer. Two special cases are flagged: when the destination itself is inside a load balancer, and when the Paris stop point is at the convergence point of a load balancer. These special cases explained the possible black holes found in the 1000-destination data; however, some apparently genuine black holes in load balancers were found in the 5000-destination data. It may be possible to continue this work on PlanetLab, but it would probably be necessary to use ICMP probes, which would reduce the number of per-flow load balancers seen and so require more probing to observe the same number of load balancers.
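
Roughly, the check per short trace looks something like the following sketch; the trace and load-balancer structures and helper names are made up for illustration and are not the real analysis program.

```python
# Illustrative sketch only: the structures and names here are hypothetical,
# not those of the actual warts analysis program.

def classify_short_trace(paris_stop, destination, initial_mda_hops,
                         final_mda_hops, load_balancers):
    """Classify why a Paris traceroute stopped short of the destination.

    paris_stop        -- address where the short Paris trace ended
    initial_mda_hops  -- set of addresses seen in the initial MDA run
    final_mda_hops    -- set of addresses seen in the final MDA run
    load_balancers    -- list of dicts: {"members": set, "convergence": addr}
    """
    # A route change between the bracketing MDA runs means the short Paris
    # trace cannot be attributed to a black hole in a load balancer.
    if initial_mda_hops != final_mda_hops:
        return "route-changed"

    for lb in load_balancers:
        # Special case: the destination itself sits inside a load balancer.
        if destination in lb["members"]:
            return "destination-inside-lb"
        # Special case: the Paris trace stopped at the convergence point.
        if paris_stop == lb["convergence"]:
            return "stopped-at-convergence"
        # Otherwise, a stop inside a load balancer is a candidate black hole.
        if paris_stop in lb["members"]:
            return "candidate-black-hole"

    return "stopped-outside-lb"
```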

Initial profiling runs were carried out using Valgrind's callgrind tool. It took me a while to realise that I could not use my Internet Simulator bash script, as callgrind then appeared to analyse the bash shell itself. When I ran the program directly, I still could not start with instrumentation disabled and then activate it manually: when I tried this, no events were collected, which seems to be a callgrind bug. Another possibility is programmatic activation of callgrind instrumentation, by recompiling the simulator with an activation macro included. It turns out that making memory map nodes in the build method starts almost immediately, and I don't know how long it takes before it gets to making links; running Valgrind without a delayed start should help answer this. The objective is still to delay for a day and then run instrumentation for two days.

18 Mar 2014

Last week was a bit slow; I spent ages double-checking the values in the spreadsheet and making sure the correct cell ranges and formulas were being used. Found quite a few mistakes, so it wasn't entirely a waste of time. Then I spent the rest of the week updating the detector probabilities script with the new values. This took forever too, since there are several detectors and two methods, each with their own probability values, event grouping, etc.

17 Mar 2014

Started to work (again) on my honours project, which is a continuation of my summer research work. Submitted a blurb about my project and re-read a couple of the academic papers I had originally looked at, to help decide where to take the project next.

17 Mar 2014

Spent about half of my week continuing to validate netevmon events on amp.wand.net.nz. After noticing that the TEntropy detectors were tending to alert on pairs of single measurement spikes that were 2-3 minutes apart, I modified the detectors to require a minimum number of measurements contributing entropy (4) before triggering an alert (provided the time series was in a "constant" state). This seems to have removed many insignificant events without affecting the detection of major events, except for some cases where the TEntropy detectors might trigger a little later than they had previously.
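
The gate is essentially this (a minimal sketch with made-up names, not netevmon's actual code):

```python
# Minimal sketch of the gating idea; only the minimum of 4 contributing
# measurements comes from the post, everything else is illustrative.

MIN_CONTRIBUTING = 4   # measurements that must contribute entropy

def should_alert(state, contributing_measurements, entropy, threshold):
    """Only raise a TEntropy alert for a 'constant' series once enough
    measurements have contributed to the entropy value."""
    if state == "constant" and contributing_measurements < MIN_CONTRIBUTING:
        # A pair of isolated single-measurement spikes is not enough evidence.
        return False
    return entropy >= threshold
```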

Started implementing the new traceroute table schema within NNTSC. Because there are two tables involved (one for paths and one for test results), it is a bit more complicated than the ICMP and DNS tables. Having to cast a list of IP addresses into an SQL array whenever we want to insert into the path table makes matters worse. At this stage, I've got inserts working sensibly and am now working on making sure we can query the data. As part of this, I am trying to streamline how we construct our queries so that it is easier to keep track of all the query components and parameters and keep them in the correct order.
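
For illustration, the two-table insert looks roughly like the sketch below; the table and column names are stand-ins, and the list of hop addresses has to be cast to inet[] even though psycopg2 converts a Python list into an SQL array:

```python
# Sketch of the two-table insert with hypothetical table/column names.
import psycopg2

def insert_traceroute(conn, stream_id, timestamp, path, packet_size):
    """path is a list of hop addresses as strings, e.g. ['10.0.0.1', ...]."""
    with conn.cursor() as cur:
        # Store the path in its own table and get back an id for it.
        cur.execute(
            """INSERT INTO traceroute_paths (path)
               VALUES (%s::inet[]) RETURNING path_id""",
            (path,))
        path_id = cur.fetchone()[0]

        # The per-test result row only references the path.
        cur.execute(
            """INSERT INTO traceroute_results
               (stream_id, timestamp, path_id, packet_size, hop_count)
               VALUES (%s, %s, %s, %s, %s)""",
            (stream_id, timestamp, path_id, packet_size, len(path)))
    conn.commit()
```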

14 Mar 2014

Spent a few hours this week looking into 802.15.4, some implementations of it, and common protocol stacks, all of which will be relevant to my honours project this year. Wrote a quick blurb outlining my objectives, which will probably have changed significantly by the time I write my proposal next week.

The blurb: 802.15.4 is an IEEE standard for the physical and MAC layers of highly resource-constrained, low speed wireless personal area networks. Protocols at higher layers allow communication with other devices: 6LoWPAN, an implementation of IPv6 over 802.15.4, allows communication with any other IPv6 devices also bridged into the network. The security considerations of technologies such as these are increasingly scrutinised. Existing security standards are not viable on resource-constrained networks, and as yet, no solutions for this scenario have been standardised. This project therefore seeks to evaluate proposed solutions for achieving fully secure communication on an IP-based Internet of Things.

Perhaps, instead of security, another aspect I would like to look into is the deployment of HTTP and UPnP over an 802.15.4 network, which would be useful for connecting IoT devices.

12 Mar 2014

Spent the first half of the week finishing up events and double-checking the values, and finally got a sufficient sample size for each of the event group categories that I was using. Then I spent an unfortunate amount of time calculating the values to use for the different belief fusion methods for each of the groups. It turns out that Google Docs is not that smart and doesn't always know what I want it to do, so there was lots of manual cell address entering, formula rechecking, and whatnot.

Finally, I started updating the eventing script to incorporate the recent changes, e.g. the Hidden Markov Model detector. I also added triggers to the eventing script to detect when each of the fusion methods reaches or exceeds a significance probability value (currently 95%). This will be used to determine which fusion method is fastest at declaring events significant. I also added code to output the order in which the detectors fired for each event group, since it might be interesting to see if there is a pattern there.
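
The triggers amount to something like this sketch (the fusion-method interface and names are made up, not the real eventing script):

```python
# Illustrative sketch of the significance triggers and firing-order tracking.

SIGNIFICANCE = 0.95

class EventGroup:
    def __init__(self, fusion_methods):
        self.fusion_methods = fusion_methods   # method name -> current probability
        self.first_crossed = {}                # method name -> time it first hit 95%
        self.firing_order = []                 # detectors in the order they fired

    def detector_fired(self, detector, timestamp, new_probabilities):
        # Record the order in which detectors contribute to this group.
        if detector not in self.firing_order:
            self.firing_order.append(detector)

        # Note the first time each fusion method declares the group significant.
        for method, prob in new_probabilities.items():
            self.fusion_methods[method] = prob
            if prob >= SIGNIFICANCE and method not in self.first_crossed:
                self.first_crossed[method] = timestamp
```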

11 Mar 2014

Thinking about the logistics of polling paths made me realise that if I want to have paths in place to fail over to, I am going to need to create those paths using as few OpenFlow rules as possible. This can be done using two rules per destination per switch, but discovering those rules is complicated.
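
For illustration only, the sketch below uses a simple heuristic in the spirit of IP fast-reroute loop-free alternates, which is not the algorithm I describe next: with unit-cost links, a neighbour no further from the destination than the current switch can never route back through it, so it makes a safe backup next hop.

```python
# Simple loop-free-alternate-style heuristic, not the algorithm in this post.
from collections import deque

def bfs_distances(adj, dst):
    """Hop counts from every switch to dst; adj maps switch -> neighbours."""
    dist = {dst: 0}
    queue = deque([dst])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def two_rules_per_switch(adj, dst):
    """Return {switch: {"primary": next_hop, "backup": next_hop or None}}."""
    dist = bfs_distances(adj, dst)
    rules = {}
    for sw in adj:
        if sw == dst or sw not in dist:
            continue
        # Primary rule: forward towards a neighbour on a shortest path.
        primary = min((n for n in adj[sw] if n in dist), key=lambda n: dist[n])
        # Backup rule: any other neighbour at least as close to the destination,
        # whose own shortest path therefore cannot come back through this switch.
        backups = [n for n in adj[sw]
                   if n != primary and n in dist and dist[n] <= dist[sw]]
        rules[sw] = {"primary": primary,
                     "backup": backups[0] if backups else None}
    return rules
```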

So I have been having fun with graph theory: proving that it is always possible to achieve this with only two rules per switch, then trying to find an algorithm that does so tidily. I think my current algorithm is polynomial, though probably not a very nice order of polynomial, and it's a bit messy, so I am not entirely confident of that.

I also started to get worried today that it won't actually find a result at all in some cases. This is all kinda peripheral to my project, so I don't really want to spend too much time on it. But it is interesting and I can probably spin a chapter out of it.

11 Mar 2014

A second run of the Internet Simulator has been started with delayed Valgrind: callgrind will be started after a day of running. This is to find out what the simulator is spending most of its time doing, as it seems to be taking too long to run. These runs are each using less than ten percent of virtual memory, which is more economical than I thought. Tony McGregor is going to have a look at these results to help determine the problem.

More work has been carried out on our version of fast mapping, which uses one round of MDA traceroute before and after 6-8 rounds of Paris traceroute. The analysis program has been upgraded to be more concise and descriptive. Paris traceroute has also been upgraded to vary the flow ID used from trace to trace within the same scamper process; previously the source port value was generated from the process ID. Furthermore, a series of data sets of 5000 destinations each has been generated, and a run of two cycles using two different data sets has been initiated to test the changes.
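
Purely as an illustration (not scamper's actual code), varying the source port per trace instead of fixing it per process could look like:

```python
# Hypothetical sketch: derive a per-process base port from the PID, then
# offset it by the trace number so each trace uses a different flow ID.
import os

def paris_source_port(trace_index, base=33000, span=2048):
    pid_offset = os.getpid() % span
    # Result always stays within [base, base + span).
    return base + (pid_offset + trace_index) % span
```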

It appears that some of the promising results from fast mapping so far weren't actually black holes in load balancers. However, new data is required, as the results collected before the problems described above were fixed are less useful.

10 Mar 2014

Added the ability to set the source interface/address in AMP so that all tests will use the specified source. Individual tests can also be configured in the schedule to use a particular source.

Updated the sample configuration files to include all the new config options that have been added recently, with a small bit of documentation about how they all work.

Started working on getting the new throughput test that Richard wrote to work with the new remote-server code. The throughput server will now be started when something asks for it on the control connection. I have some issues around the control connection blocking or closing early, making it hard to send/receive the listening port number, but quickly hacking around that makes the test work - just need to find the correct and tidy solution to this.

Built a new Debian Wheezy image for the emulation network, for John to use with his virtual machine testing.

10 Mar 2014

Finished fixing various parts of Cuz so that it should be able to survive postgres restarts.

Started working on NNTSC version 3, i.e. implementing the database design changes that Brad has been testing. Fortunately, the way we've coded NNTSC meant that this was not overly difficult. Both insertion and querying now appear to work with the new code, and we've even fixed a problem we would have had under the old system where a stream could only belong to a single label. Now we query each label in turn, so a stream can belong to as many labels as required to satisfy the query.
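
Schematically, the per-label querying is something like this sketch, where the query helper and its arguments are made-up stand-ins for the real NNTSC code:

```python
# Illustrative sketch of querying each label in turn.

def query_labels(db, labels, start, end):
    """labels maps a label name to the list of stream ids belonging to it.
    A stream may appear under several labels; each label is queried
    separately and its results are tagged with that label."""
    results = {}
    for label, stream_ids in labels.items():
        # db.query_measurements is a hypothetical helper, not the real API.
        results[label] = db.query_measurements(stream_ids, start, end)
    return results
```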

Also updated the AMP ICMP and DNS table structure to be more efficient in terms of space required per row.

Spent the latter part of the week verifying the events that netevmon has produced for amp.wand.net.nz. Found and fixed some netevmon issues in the process, in particular a bug where we were not subscribing to all the streams for a particular ICMP test, so we were missing chunks of data depending on which addresses were being tested. Overall, the event detection isn't too bad -- we pick up a lot of insignificant events, but usually only one detector fires for each one, so Meena's fusion techniques should be able to determine that they aren't worth bothering anyone about. The few major events we do get are generally reported by most of the detectors.

Gave a couple of lectures on libtrace for 513.