29 May 2014

Finished most of the ampy reimplementation. Implemented all of the remaining collections and documented everything that I hadn't done the previous week, including the external API. Added caching for stream->view and view->groups mappings and added extra methods for querying aspects of the amp meta-data that I had forgotten about, e.g. site information and a list of available meshes.
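
The caching is conceptually just memoising the mapping lookups so that repeated requests for the same stream or view don't hit the database every time. A rough sketch of the idea (class and method names here are illustrative, not the actual libampy API):

    import time

    class MappingCache(object):
        # Illustrative stream->view / view->groups cache with a simple
        # expiry. This is only a sketch; the real libampy code differs.

        def __init__(self, timeout=300):
            self.timeout = timeout
            self.cache = {}

        def get(self, key, fetchfunc):
            entry = self.cache.get(key)
            if entry is not None:
                value, fetched = entry
                if time.time() - fetched < self.timeout:
                    return value

            # Not cached (or expired), so do the expensive lookup, e.g. a
            # database query mapping a stream id to a view id.
            value = fetchfunc(key)
            self.cache[key] = (value, time.time())
            return value

    # Hypothetical usage: one cache per mapping direction.
    streamview = MappingCache()
    viewgroups = MappingCache()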

Started re-working amp-web to use the new ampy API, tidying up a lot of the python side of amp-web as I went. In particular, I've removed a lot of web API functions that we don't use anymore and also broken the matrix handling code down into more manageable functions. Next job is to actually install and test the new ampy and amp-web.

Spent a decent chunk of time chasing down a libtrace bug on Mac OS X, which was proving difficult to replicate. Unfortunately, it turned out that I had already fixed the bug in libtrace 3.0.19 but the reporter didn't realise they were using 3.0.18 instead. Also received a patch to the libtrace build system to try and better support compilers other than gcc (e.g. clang), which prompted me to take a closer look at some of the gcc-isms in our build process. In the process, I found that our attempt to check whether -fvisibility is available was not working at all. Once I had replaced the configure check with something that works, the whole libtrace build broke because some function symbols were no longer being exported. Managed to get it all back working again late on Friday afternoon, but I'll need to make sure the new checks work properly on other systems, particularly FreeBSD 10 which only has clang by default.

28 May 2014

Further development has been carried out on the warts-analysis-based MDA (load balancer topology) simulator. Local data is looking reasonable now and a start has been made on coding the global or distributed data processing part. The program is in the process of being debugged. Furthermore, it seems like some analysis of the many-to-few scenario would be a good idea for this work, as it would tie in with the emphasis on many vantage points. Factors considered so far include various numbers of stages where controller info is made use of, and a window of 500 traces. The frequency of sending control data is also varied.

The Doubletree and Traceroute simulator runs have been carried out for the data from one day of CAIDA data collection. I am now in the process of producing graphs from it. Factors included many versus few sources, Doubletree versus Traceroute, and varying numbers of stages where control data is sent. It seems like a good idea to also reduce the number of vantage points in several stages and repeat the simulator runs.

28 May 2014

I've been working on implementing empty tick messages, which can be produced every X seconds to assist programs that want to report results in tracetime (wall time).

I had two main choices for producing these messages: either start a new thread to send them, or use timer_create() to enter a signal handler. I've opted for a separate thread because it isn't obvious which thread the signal handler is best attached to, whereas a separate thread will find the least busy core and cannot block other threads out of shared resources by holding a lock, etc.

Later on, this will also make it easier to customise the timer's behaviour, such as accounting for skew in the format.
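
The real implementation is C inside libtrace, but as a purely illustrative Python sketch of the pattern (a dedicated thread posting a tick at a fixed wall-clock interval, with each tick scheduled relative to the previous one to limit accumulated skew):

    import threading
    import time

    def tick_thread(interval, report, stopping):
        # Post an empty 'tick' every `interval` seconds until stopped. Each
        # tick is scheduled relative to the previous one, which stops small
        # delays from accumulating into skew.
        next_tick = time.time() + interval
        while not stopping.wait(max(0, next_tick - time.time())):
            report(next_tick)
            next_tick += interval

    def report(timestamp):
        print("tick at", timestamp)

    stopping = threading.Event()
    sender = threading.Thread(target=tick_thread, args=(1.0, report, stopping))
    sender.daemon = True
    sender.start()
    time.sleep(5)      # main processing would happen here
    stopping.set()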

I've also created/updated slides for my honours practice talk.

It's been a busy week with other assignments, so this is not fully integrated with tracertstats yet and needs a configuration option added; however, it is functioning.

26 May 2014

I've spent a lot of my time lately working through various bugs with my controller implementation and getting a real feel for the project environment. I've recently pulled all my KVM test hosts across to virt-manager and set them up with serial access, giving me easy CLI access to each of my hosts through the virsh console, which unsurprisingly is proving to be much quicker and easier than accessing them through X11 (hat-tip to Brad for that). Using virt-manager should hopefully allow me to scale management of my hosts much more efficiently as my virtual topologies change and grow.

I have successfully managed to create dynamic flows between a DHCP server and a client host requesting an IP address, using an anonymous unicast-like transmission to discover how the DHCP server is connected to our virtual switch, i.e. its 'physical' port. Once this dynamic flow is created, server and client are freely permitted to exchange packets, with the intention of creating a new dynamic flow to some form of WAN for the client once the DHCP lease negotiation process is complete.
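
The rough shape of that logic, as a hedged Ryu sketch (OpenFlow 1.3; the handler below is a simplification for illustration, not the actual controller code):

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3
    from ryu.lib.packet import packet, ethernet, udp

    class DhcpPortLearner(app_manager.RyuApp):
        # Illustrative only: learn which switch port the DHCP server sits
        # behind, then install a dynamic flow towards it.
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            msg = ev.msg
            dp = msg.datapath
            ofproto = dp.ofproto
            parser = dp.ofproto_parser
            in_port = msg.match['in_port']

            pkt = packet.Packet(msg.data)
            udp_hdr = pkt.get_protocol(udp.udp)
            if udp_hdr is None or udp_hdr.src_port != 67:
                return

            # Replies sourced from UDP port 67 come from the DHCP server,
            # so the port this packet arrived on is its 'physical' port.
            eth = pkt.get_protocol(ethernet.ethernet)
            self.logger.info("DHCP server %s is on port %d", eth.src, in_port)

            # Dynamic flow: traffic destined for the server's MAC goes out
            # that port (the reverse direction is handled similarly).
            match = parser.OFPMatch(eth_dst=eth.src)
            actions = [parser.OFPActionOutput(in_port)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))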

I have been caught up for a few days now fighting with my Ryu controller and Open vSwitch instance over packet buffering issues. I have been attempting to parse incoming packets seen by the controller with the supplied Ryu packet API to extract their payloads and other interesting information, but for a reason that presently escapes me, incoming packets are being unintentionally buffered and any off-the-wire data extraction the controller attempts is therefore being truncated. If, for instance, we attempt to do this with DHCP packets, exceptions are thrown by the API when we try to unpack the packet payload struct, and some analysis reveals that the payloads are being truncated after ~123 bytes. Given that much of the interesting information in a DHCP packet is located at the tail of the payload in the options field, this presents a problem.
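
One possibility (not yet confirmed as the cause here): by default an OpenFlow switch buffers the packet and sends only the first miss_send_len bytes (128 in the spec) up to the controller, which is in the same ballpark as the truncation above. A sketch of asking Ryu for complete, unbuffered packets, assuming OpenFlow 1.3:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class FullPacketIn(app_manager.RyuApp):
        # Sketch: ask the switch to deliver whole packets to the controller
        # rather than a buffer id plus a truncated prefix.
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofproto = dp.ofproto
            parser = dp.ofproto_parser

            # Raise miss_send_len so packet-ins are not cut short.
            dp.send_msg(parser.OFPSetConfig(dp, ofproto.OFPC_FRAG_NORMAL,
                                            ofproto.OFPCML_NO_BUFFER))

            # Table-miss flow that punts to the controller without buffering.
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=parser.OFPMatch(),
                                          instructions=inst))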

I have a few suggestions from other members about how to proceed past my existing problem. Next up, I'm going to try generating some sample packet data using scapy and passing it through a series of default Open vSwitch flows (ignoring my controller implementation) to see both how the data behaves and whether the buffering occurs in the same way. If that doesn't uncover anything, I'll try testing some similar implementations on older versions of Open vSwitch to see if this problem is perhaps localised to a specific version of one of my environment's components. Other possibilities include investigating DHCP server APIs to see if I can bypass off-the-wire data extraction, and bypassing my DHCP component altogether to focus on the next parts of my project - such as VLAN awareness and AAA database interfacing.
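
For the scapy test, something along these lines should produce DHCP-shaped packets with a populated options field, so any truncation of the tail is easy to spot on the far side (this is just a sketch; the MAC address and interface name are placeholders):

    from scapy.all import BOOTP, DHCP, IP, UDP, Ether, sendp

    # A DHCP discover with options at the end of the payload, so truncation
    # will be obvious at the receiver.
    discover = (
        Ether(src="02:00:00:00:00:01", dst="ff:ff:ff:ff:ff:ff") /
        IP(src="0.0.0.0", dst="255.255.255.255") /
        UDP(sport=68, dport=67) /
        BOOTP(chaddr=b"\x02\x00\x00\x00\x00\x01", xid=0x1234) /
        DHCP(options=[("message-type", "discover"),
                      ("hostname", "test-client"),
                      "end"])
    )

    # 'veth0' is a placeholder for whichever interface feeds the OVS bridge.
    sendp(discover, iface="veth0", verbose=False)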

21 May 2014

Test results have been coming in, and are pretty much going exactly as expected.

Packet colouring can find loss within 2 seconds, as you would expect. BFD takes a long time to notice low rates of packet loss, but will eventually detect rates at least as low as 2%. Basically it has to send on the order of 1/(n^3) packets to detect a packet loss rate of n. That has all brought me to the conclusion that packet colouring combined with the built-in link down detection in OpenFlow is by far the most efficient way of finding loss. Link loss is detected immediately, and everything else is detected within 2 seconds.
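
Putting some rough numbers on that 1/(n^3) estimate:

    # Approximate packet counts for BFD to detect a loss rate of n, using
    # the 1/(n^3) estimate above.
    for n in (0.10, 0.05, 0.02):
        print("loss rate %.0f%% -> ~%d packets" % (n * 100, round(1 / n ** 3)))

    # loss rate 10% -> ~1000 packets
    # loss rate 5% -> ~8000 packets
    # loss rate 2% -> ~125000 packets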

The counters on the Pronto are still pretty dodgy. Using packet counters with multiple bridges on the one device seems like a bad idea: my packets are being handled by one bridge but are being counted towards flows on the other.

I've started building path calculators. I have one which tries to exploit OpenFlow fast failovers, so when a link goes down it only modifies the path at the point where the link down is detected. This could potentially require the smallest number of flows, but unless your network fits certain topological constraints it probably won't. I'm currently implementing the two-flows-per-switch version, which may involve very long paths in certain circumstances (basically, longer paths end up being prioritised over shorter paths in a lot of cases), and I will implement the patented version too (unless someone tells me not to; I really have no understanding of patent law), just for comparison.
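
The building block for the fast-failover approach is an OpenFlow group of type FF, where the switch itself falls back to the next live bucket when a watched port goes down. A hedged Ryu sketch of installing one (the helper and port numbers are illustrative, not my actual path calculator):

    def fast_failover_group(dp, group_id, primary_port, backup_port):
        # Sketch: an OpenFlow 1.3 fast-failover group that forwards out the
        # primary port while it is up and falls back to the backup port
        # without any controller involvement. 'dp' is a Ryu Datapath.
        ofproto = dp.ofproto
        parser = dp.ofproto_parser
        buckets = [
            parser.OFPBucket(watch_port=primary_port,
                             actions=[parser.OFPActionOutput(primary_port)]),
            parser.OFPBucket(watch_port=backup_port,
                             actions=[parser.OFPActionOutput(backup_port)]),
        ]
        dp.send_msg(parser.OFPGroupMod(dp, ofproto.OFPGC_ADD,
                                       ofproto.OFPGT_FF, group_id, buckets))
        # Flows then point traffic at the group via OFPActionGroup(group_id).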

21 May 2014

Successfully built Debian packages of the new amplet client and
installed them on a new machine with multiple network interfaces. Spent
some time making sure that all the configuration files ended up in the
right place, and that the init script performed as expected.

Spent a lot of time looking into how well the DNS lookups behaved with
multiple clients running at once, and whether they respected interface
bindings when they were set. In general, everything co-existed nicely
and worked, but some possible failure modes could bring the whole thing
down: if DNS sockets were reopened due to a query failure then they
would reset to normal behaviour. Started to investigate other
approaches to name resolution - it looks like using libunbound will be
the way forward from here, as it also gives us asynchronous queries
(synchronous lookups were becoming time consuming) and caching.
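
The amplet client itself is C, but libunbound's Python bindings expose
the same shape of API, which makes for a quick sketch of the
asynchronous model (the hostname and the polling loop are illustrative):

    import unbound  # libunbound's Python bindings (pyunbound)

    ctx = unbound.ub_ctx()
    ctx.resolvconf("/etc/resolv.conf")   # use the system's forwarders

    def callback(my_data, status, result):
        # Invoked once the query completes; answers are cached in the ctx.
        if status == 0 and result.havedata:
            my_data['addresses'] = result.data.address_list
        my_data['done'] = True

    my_data = {'done': False}
    status, async_id = ctx.resolve_async("example.com", my_data, callback,
                                         unbound.RR_TYPE_A,
                                         unbound.RR_CLASS_IN)

    # Other work can happen here; process() fires callbacks for any
    # queries that have completed in the meantime.
    while status == 0 and not my_data['done']:
        status = ctx.process()

    if status != 0:
        print("resolve error:", unbound.ub_strerror(status))
    else:
        print(my_data.get('addresses'))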

20 May 2014

The black hole detector finished its run and so I attempted to run the analysis that I worked on previously. When I ran this it generated a segmentation fault, so I reran it with permission for a core dump. When I did this it came to the same point and failed to exit. As I was worried about running jobs that get out of control, I ran it on my laptop rather than Spectre and found that the same thing happened. At that point I closed the terminal window to force the program to end. It still took a while to end, however, and when it did I found a valid core dump. It turned out that because the program was using a lot of memory, it took a while before there was any sign of what it was doing: it was in fact writing a large core to disk. After dealing with a couple of cases of null pointers, which I found out about from the cores, the program ran normally on Spectre again. I then further reduced the cases that it reports. I excluded cases where the destination was reached in a smaller number of hops, which indicates asymmetry of a load balancer. I also excluded cases where the initial and final MDA Traceroutes differed. This latter point should perhaps be revisited to allow the load balancer structure to differ, i.e. if a black hole still exists at the final MDA trace, the data would not be seen if it has been excluded.

Further investigation of the non-flow-ID, load-balancing-fields data has involved checking that the cases of missed load balancers, where load balancers have previously been seen, are actually now simple non-load-balancing nodes. In the cases that have been investigated, this expectation has been confirmed.

The MDA Internet simulator based on warts analyses has run without crashing to determine probe packet counts, where local set information is used to avoid repeated analysis of the same load balancers. The program is now in the process of being debugged and extended to include distributed performance analysis. The analysis has 40 factor combinations, which run in one go and take one or two days. It uses quite a bit of memory, so it will probably need to be run on Wraith as it grows.

20 May 2014

Finished up with the pausing code; it is now working for files and will not lose any data, and it similarly functions as expected for the various different formats, both those with native multi-threaded support and those without. Packets waiting in the result queues are also being copied so that they won't be pointing at bad memory if pausing a zero-copy format (such as ring:) actually closes it and invalidates the memory.

Implemented restarting for single-threaded traces (which I had neglected so far), so a full pause and start cycle is possible. Fixed some other minor bugs and did some tidying up, such as zeroing structures to play nicely with valgrind.

20 May 2014

Have been trying to get communication working between devices this week, but nothing in Contiki seems to just work quite as you would expect it to. I read in several places (although each using fairly wishy-washy methods) that it was possible to use a mote as an RPL border router, which the host PC could tunnel through via a serial connection to connect devices to the wider network/internet. I figured I would give this a go before I jumped into writing code to test that devices were networked and communicating properly. As it turned out, the RPL border router app for the mbxxx platform wouldn't compile out of the box due to missing files (in both the 2.7 release and master branches). I worked out that after symlinking to what seemed to be the right files I could get the border router to compile properly, but connecting to it wasn't straightforward either and I eventually gave up in search of other methods.

Found what look to be excellent resources from Jacobs University, in the form of lab handout tutorials/walkthroughs. I followed through one on bridging an IPv6 network (using a different included app than the RPL border router), although the document was from 2009 and for a different platform. Some code in Contiki differs between the platforms: mbxxx seems to lack certain build functions, but I was able to substitute in the ones from the suggested platform. That being said, I finished the walkthrough, but I still don't seem to be able to reach a remote node through the tunnel mote, so I'm pretty confused now. If I continue down this track I may have to look back through past Contiki releases to see whether the code for each platform has changed -- I assume the bridge for mbxxx worked at some point.

Or I could just be completely doing it wrong. But either way, the lack of documentation is maddening...

19 May 2014

Continued the ampy reimplementation. Finished writing the code for the core API (aside from any little functions that we use in amp-web that I've forgotten about) and have implemented and tested the modules for the 3 AMP collections and Smokeping. Have also been adding detailed documentation to the new libampy classes, which has taken a fair chunk of my time.

Read over a couple of draft chapters from Meena's thesis and spent a bit of time working with her on improving the order and clarity of her writing.

Fixed a libtrace bug that Nevil reported where setting the snap length was not having any effect when using the ring: format.