I have spent the last few weeks working on my thesis. Last night I submitted, which means my honours project has come to a close.
I need to remove ubiquiOS so that I can release the project as open source. I will license the project under the 3-clause BSD license and make it freely available on my GitHub.
Spent most of the week continuing to work on the test scheduling web
interface. The lists of meshes and sites are now the primary entry
points, and if you click through then you have access to the metadata
about the site/mesh and the specific schedule that applies. These can
all be edited to change the names displayed in the results interface,
and schedules that are updated are made available for amplet clients to
fetch.
The layout and flow are mostly settled now, though they will likely be
updated after more frequent use. I've got the base functionality working
and have started adding some of the nice features that help make sure
the right data gets added, or inform the user what is expected. Slow and
steady progress.
I've spent most of my week getting the Accton (AS4600) up and running with Open Network Linux. I figured out a solution to control the fans: we managed to set them to a quieter fixed speed using the C API exposed by the Open Network Linux Platform Infrastructure (ONLP) library.
Among other things, we now have a persistent file system, networking for management, and scripts that run automatically at boot to slow the fans and so on, which makes it possible to reboot the switch.
Once that was sorted I managed to get OF-DPA running on it. The pipeline is very restrictive, more so than I expected. Using a mix of example code and the documentation I got a version of simplest switch working (simplest switch statically forwards packets between two ports). The code is located at https://github.com/openvapour/simplestswitch . One gotcha with OF-DPA is that it gives the OpenFlow group number special meaning: outputting packets directly out a port does not seem possible, and instead has to be done via a group. Group numbers are bitmasked with special meanings, encoding the type of group and possibly a VLAN and/or port ID. For instance, an L2 unfiltered interface group uses the high bits to indicate the type and the low bits the interface; this essentially behaves like an OpenFlow indirect group.
However, I've found that packet-ins segfault OF-DPA; I currently believe this to be an issue in the closed source code.
I managed to hack together an implementation of my final test - the performance of reactively adding flow mods. However, having got side-tracked on the Accton, I still need to tidy this up a bit and ensure I'm getting all the output I could need, and then look towards running tests.
Spent a fair chunk of my week proof-reading, first a document responding to questions about the BTM project, then Dan and Darren's Honours reports.
Tracked down and fixed a bug in parallel libtrace where ticks were messing with the ordered combiner, causing some packets to be sent to the reporter out of order. Also managed to replicate and fix the memory leak bug that was causing Yindong's wdcap on wraith to invoke the OOM killer.
Continued poking at unknown port 443 and port 80 traffic in libprotoident. Most of my time was spent trying to install and capture traffic from various Chinese applications that I had reason to suspect were causing most of my remaining unknown traffic, with mixed success.
The discussion and conclusions section of my thesis was updated to include observed changes in load balancer prevalence. Various other changes were made, including moving some discussion into the relevant chapters rather than the final discussion chapter.
A draft of my PhD conference talk was put together. The talk will introduce my thesis including all of its chapters.
Continued working on the oflops tests. I've been updating the add-flows test and seeing how it performs on the hardware, and whether it will be suitable for use as part of a reactive test. Rules are added quickly; the entire table can be filled in a couple of seconds. However, this should be long enough to show any difference when running a reactive test.
I've been running the packet-in and packet-out tests on the switches to check that things are stable, as well as tweaking the script to output results that are easier to deal with.
I've been working a little with the Accton and have loaded Open Network Linux onto it. However, it is very loud because its fans run at full speed; this needs to be fixed before I can run it during work hours. I have not had the chance to try loading anything else onto it yet.
Updated the HTTP test to run correctly with the newer libcurl libraries
on Debian Jessie. As part of that I tidied up the overly complicated
main loop, and fixed a parsing bug when encountering "hreflang"
attributes. Also updated the amplet client build system to be more
explicit about which libraries are included so newer gcc doesn't complain.
Added an alternate path through nntsc to use old-style AMP save
functions for test data that isn't in the new protocol buffer format.
Hopefully these are only temporary, but they will be required for a
while during the transition period as we update all the old clients.
Spent some time comparing data between amplets running on Wheezy and
Jessie, as well as with 32bit Wheezy clients to make sure that all the
data is consistent. Most of the tests look good, except that the
throughput data doesn't appear to agree between monitors, and I still
need to keep an eye on the changes in the HTTP test to make sure that is fine.
Reworked the layout of the schedule webpages to include more information
about meshes/sites, and link them together for easy navigation.
Finally released the libtrace4 beta on Tuesday, after doing some final testing with the DAG cards in the 10G dev machines.
Managed to find a few more protocols to add to libprotoident, but am now trying to move towards releasing a new version. Started having a closer look at TCP port 80 and TCP port 443 traffic in my Waikato traces, with the aim of getting as much traffic correctly classified as I can prior to doing an in-depth analysis of what is actually using those ports.
Spent Friday afternoon reading over Darren's honours report and providing some hopefully useful feedback.
I had my PhD proposal presentation at the start of the week; it went well and I've been given the green light to continue with my research. The weeks prior were spent preparing the proposal and presentation.
In the next couple of months I will finish up the packet-in and packet-out benchmarks. After that I will work on the main topic. I plan to investigate a translation layer between existing OpenFlow applications and the OpenFlow pipelines of the switches they run on. The idea is to allow existing applications to run on incompatible hardware pipelines and to make better use of all the tables available, such as putting rules into the specialised MAC and routing tables present on some devices.
I've spent some time this week reading over the libtrace 4 documentation that Shane wrote, which looks good, and making some minor improvements to the oflops-turbo tests. I fixed an issue with loading libraries, which turned out to be an autotools problem.
The long-awaited libtrace 4 is now available for public consumption! This version of libtrace includes an all new API that resulted from Richard Sanger's Parallel Libtrace project, which aimed to add the ability to read and process packets in parallel to libtrace. Libtrace can now also better leverage any native parallelism in the packet source, e.g. multiple streams on DAG, DPDK pipelines or packet fanout on Linux interfaces.
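As a taste of the new API, here is a minimal parallel program. The callback and setup function names are taken from the Parallel Libtrace HOWTO as I remember them, so treat the exact signatures as assumptions and check the HOWTO before copying:

```c
/* A sketch of a minimal parallel libtrace program: process packets
 * from a URI (e.g. a pcap file or DAG device) on several threads. */
#include <stdio.h>
#include <libtrace_parallel.h>

/* Called per packet on each processing thread. Returning the packet
 * hands it back to libtrace for reuse. */
static libtrace_packet_t *per_packet(libtrace_t *trace,
        libtrace_thread_t *t, void *global, void *tls,
        libtrace_packet_t *packet)
{
    /* ... per-thread packet processing goes here ... */
    return packet;
}

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <uri>\n", argv[0]);
        return 1;
    }

    libtrace_t *trace = trace_create(argv[1]);
    libtrace_callback_set_t *pktcbs = trace_create_callback_set();
    trace_set_packet_cb(pktcbs, per_packet);

    /* Ask for four processing threads; native parallelism in the
     * source (DAG streams, DPDK queues, Linux fanout) maps onto them. */
    trace_set_perpkt_threads(trace, 4);

    if (trace_pstart(trace, NULL, pktcbs, NULL) == -1) {
        trace_perror(trace, "trace_pstart");
        return 1;
    }
    trace_join(trace);

    trace_destroy(trace);
    trace_destroy_callback_set(pktcbs);
    return 0;
}
```

Link against libtrace as usual (e.g. -ltrace); the HOWTO covers the per-thread storage and reporter-thread features this sketch leaves out.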
At this stage, we are considering the software to be a beta release, so we reserve the right to make any major API-breaking changes we deem necessary prior to the final release, but I'm fairly confident that the beta API will be close to the final product. That said, now is a good time to try the new API and let us know if there are any problems with it, as it will be difficult to make API changes once libtrace 4 moves out of beta.
Please note that the old libtrace 3 API is still entirely intact and will continue to be supported and maintained throughout the lifetime of libtrace 4. All of your old libtrace 3 programs should still build and run happily against libtrace 4; please let us know if this turns out to not be the case so we can fix it!
Learn about the new API and how parallel libtrace works by reading the Parallel Libtrace HOWTO.
Download the beta release from the libtrace website.
Send any questions, bug reports or complaints to contact [at] wand [dot] net [dot] nz