In the past couple of weeks I've been working further on processing Table Type Patterns and matching rules against them. I have extended the fitting of existing flow rules into the pipeline to consider which prior tables a packet must traverse in order to reach the table a rule fits in, and then, given that path, to check that flow rules matching the required packets can in fact be installed. All of this currently considers only the match component.
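The path-to-table check above can be sketched as a walk over goto_table links. This is a minimal illustration, assuming a toy pipeline encoded as a dict of table ids to reachable tables; it is not the real TTP structure or my actual implementation.

```python
# Hypothetical sketch: enumerate goto_table paths from the first table
# to a target table, so a rule's match can be checked against every
# table it must pass through. The dict pipeline encoding is invented
# for illustration.

def paths_to_table(pipeline, target, start=0):
    """Return all goto-table paths from start to target.

    pipeline maps a table id to the set of table ids reachable from it
    via goto_table instructions.
    """
    paths = []

    def walk(table, path):
        path = path + [table]
        if table == target:
            paths.append(path)
            return
        for nxt in sorted(pipeline.get(table, ())):
            if nxt > table:  # OpenFlow gotos must move to a later table
                walk(nxt, path)

    walk(start, [])
    return paths

if __name__ == "__main__":
    # Toy pipeline: table 0 can go to 1 or 2, both of which go to 3.
    pipeline = {0: {1, 2}, 1: {3}, 2: {3}}
    print(paths_to_table(pipeline, 3))  # [[0, 1, 3], [0, 2, 3]]
```

Once the paths are known, each table on a path can be checked for a rule that passes the required packets onwards.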
I've also been working on the task of splitting a single table of rules into multiple tables. This involves finding which parts of a flow rule's match accurately predict its actions, i.e. finding an ideal point at which a set of rules can be split across tables. This could be useful for fitting a restricted pipeline, or for finding further optimisations (note that in an unrestricted pipeline, lower priority rules can trivially be placed in a later table with a goto linking the tables). One such optimisation I'm working on is reversing a Cartesian product, as seen in switching, where any source MAC can be forwarded to any destination MAC, resulting in an n^2 expansion when placed into a single table. I try to detect this case once flows have been split, and condense the rules back down. To get this working well I've found it helps to add fake rules that send packets out their in_port, as OpenFlow will drop these packets. I've also attempted to normalise cases such as when VLANs are present, promoting all untagged packets to include VLANs so that more rules overlap.
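The Cartesian product detection above can be sketched as a set comparison: if every (src, dst) pair is present, the n*m combined rules condense to n src rules in one table plus m dst rules in the next. This is a minimal sketch under that assumption, not my actual detection code.

```python
# Hypothetical sketch of spotting a Cartesian product in a rule table.
# Rules are reduced to (eth_src, eth_dst) pairs; if the pairs form the
# full product of the src and dst sets, the table can be split in two.
from itertools import product

def is_cartesian(pairs):
    """Return (is_full_product, srcs, dsts) for a list of (src, dst)."""
    srcs = {s for s, _ in pairs}
    dsts = {d for _, d in pairs}
    return set(pairs) == set(product(srcs, dsts)), srcs, dsts

if __name__ == "__main__":
    # 2 sources x 3 destinations: 6 rules condense to 2 + 3 rules.
    rules = [(s, d) for s in ("a", "b") for d in ("x", "y", "z")]
    full, srcs, dsts = is_cartesian(rules)
    print(full, len(rules), len(srcs) + len(dsts))  # True 6 5
```

With real MAC learning tables the saving grows quadratically: 100 hosts each way is 10,000 single-table rules versus 200 across two tables.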
With both the processing of Table Type Patterns and the splitting of flow rules I've been writing some basic unit tests. I've found them useful for catching bugs as this code grows in size.
Last week Brad and I found a newer version of ofdpa for the Accton, and with his assistance we have successfully installed it. I've updated my simplest switch to work on the newer pipeline and install some functional rules. So far I've found somewhat of a version mismatch between ofdpa and the Indigo agent, which is missing some match fields required to install certain types of rules; however, in the case of the simplest switch I've worked around this. I've emailed Accton to see if it is possible to get an updated version. Despite this, the version of ofdpa seems very functional and I'm hoping to be able to do most things I want on it.
We've been doing a lot of collaborative work with our ISP partners lately and one thing that has become increasingly apparent to me is the disconnect between what ISPs expect from measurement / monitoring software and what researchers typically have the time and energy to implement.
More specifically, researchers are very good at developing new or improved measurement techniques but they are not so great at developing the necessary infrastructure around the measurements to make it easy for ISPs to deploy and use the new techniques in a production environment. As a result, the ISPs tend to fall back on tried and true monitoring software (e.g. Smokeping) even though our conversations with operators suggest that they would prefer more than just the simple metrics and graphs that such tools provide.
Had a look at using docker to build Debian packages, with some help from Brad. Got it working with the amplet2-client packages, and it looks like it could be pretty useful, especially for ensuring a clean build that uses only explicitly defined dependencies. Had a bit of a look at spawning a bunch of shells and performing parallel builds across all Debian/Ubuntu flavours at once rather than doing them serially. Got it mostly working, but I'm not happy with the way I collect the built packages on completion - how to trigger it and where to place them.
Spent some time writing some basic documentation about installing and configuring the server software to collect and display AMP data. Most aspects are covered at least briefly now.
Updated the ampweb packages to disable the default test user and to be slightly smarter about dealing with user configuration in general - the package now treats them as actual configuration files and so won't clobber them on install. Rebuilt the packages and pushed them out to the lamp server.
Started work on implementing some simple access control for the amplet client control interface, to limit who is allowed to run test servers or perform other control functions. This is the first step before I implement the ability to push schedules directly to clients.
Short week due to holidays and illness.
Made a few more fixes to the ampweb packages that were installed last week to help fix some issues we noticed during installation, including updating default configuration files to have more sensible values.
Spent some time updating documentation for the amplet client and writing man pages for some binaries that were missing them.
Spent most of the week finishing up packaging all the server-side components for AMP. Had to add a couple of patches as part of the package build process to set logging and pidfile locations, which I should go back and try to fix in a more correct manner. Spent some time installing and uninstalling packages to make sure postinst scripts were working properly to create users, databases, etc. Updated the Lightwire AMP server with the new packages, which went fairly smoothly but did have some issues with missing python dependencies (not in Debian, or too old) that were only required in some code paths.
Also updated the New Zealand AMP mesh to the newest client version. Went pretty smoothly which was nice.
Fixed up the RabbitMQ queues/bindings that I had broken when I accidentally used a queue resource in place of a routing key while exploring the erlang commands. Figured out the syntax I needed to properly create the binding to an existing queue and now have all the configuration sorted.
Spent the rest of the week packaging the remaining software that we deploy on the AMP servers - ampweb, ampy, nntsc, ampsave, netevmon. Quite a few of them have progressed a lot since they were last packaged, so lots of dependencies needed to be updated, users created, databases created and populated, etc. Almost have all the packages built and ready for a test deployment now.
Continuing my work parsing Table Type Patterns and fitting rules into the described pipelines. In the last week I've refactored the code into more of a library, so my script that prints tables now shares much of its code with the rule fitting.
I've also looked at parsing some of the sample Table Type Patterns provided by the working group. These are more complex when describing matches compared with the ofdpa TTP; most notably they use meta_members within matches and flows. Meta_members add constraints such as "one_or_more" and "exactly_one", and these can also be nested (by default "all" is used). As expected, there are a couple of special cases I've had to handle to make these TTPs work: things which are human interpretable but not easily machine readable, or cases where fields are used inconsistently, partly due to vagueness in some parts of the spec.
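The meta_member constraints described above amount to a small recursive check over which match fields a flow actually uses. Here is a minimal sketch; the tuple-based constraint tree encoding is my own invention for illustration, not the TTP JSON schema.

```python
# Hypothetical sketch of evaluating nested meta_member constraints
# ("all", "one_or_more", "exactly_one") against the set of match
# fields present in a flow. The encoding is invented: a constraint is
# either a field name, or (operator, [sub-constraints]).

def satisfied(constraint, present):
    """Return True if the fields in `present` satisfy the constraint."""
    if isinstance(constraint, str):
        return constraint in present
    op, members = constraint
    hits = sum(satisfied(m, present) for m in members)
    if op == "all":
        return hits == len(members)
    if op == "one_or_more":
        return hits >= 1
    if op == "exactly_one":
        return hits == 1
    raise ValueError("unknown meta_member %r" % op)

if __name__ == "__main__":
    # ETH_TYPE is required, plus exactly one of the two source fields.
    tree = ("all", ["ETH_TYPE",
                    ("exactly_one", ["IPV4_SRC", "IPV6_SRC"])])
    print(satisfied(tree, {"ETH_TYPE", "IPV4_SRC"}))              # True
    print(satisfied(tree, {"ETH_TYPE", "IPV4_SRC", "IPV6_SRC"}))  # False
```

The nesting falls out naturally from the recursion, since each operator just counts how many of its children are satisfied.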
Finished adding concurrent postgres-influx support to NNTSC, so now we should be able to upgrade existing deployments to use influx without having to worry about migrating the existing data from one database to another.
Added an event feedback system to amp-web so that users can click on events and tell us whether the event was useful or not and provide some reasons why that was the case. Hopefully I can use this data to make some tweaks to netevmon and improve the quality of our event detection.
Started reading Stephen's thesis.
Spent some time working on issues a new user had installing the amplet client on a new Ubuntu machine. The Jessie packages almost work fine, though there seems to be a change in behaviour (or a bug) in the libcurl-gnutls library which prevents curl from working with any of our SSL sites. In the end after a lot of chasing things around, the easiest workaround seems to be to build against the OpenSSL flavour of libcurl instead.
Got the amplet client building on Centos again so that I could update a server used as an endpoint for cooperative tests. It is now running throughput and udpstream tests to get sample data for prophet, as well as testing out the new client build.
Started to look over all the old Debian packaging I had done for the server side software to get it all working for the current versions. Updated packages for the simple parts (pywandevent, libnntsc, amppki). Got stuck writing erlang to declare queues, exchanges and bindings on the server because the default tools can't do that (and I don't want to install extra tools). Currently have declarations working fine but in creating the bindings I've created a broken queue that breaks the web interface.
Been working through parsing a Table Type Pattern. These are just JSON objects, making them easy to load, although some work is involved in extracting their meaning. So far I've been looking at Broadcom's ofdpa TTP, which represents a real world pipeline. The ofdpa TTP has a few typos which I have worked around.
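Loading a TTP and pulling out the per-table match fields looks roughly like the following. The "flow_tables", "name" and "match_set" keys follow the ONF TTP spec, but the toy TTP snippet itself is invented for illustration and is far simpler than the real ofdpa TTP.

```python
# Minimal sketch of loading a TTP (a JSON object) and listing which
# match fields each flow table allows. The embedded TTP is a toy
# example, not a real pipeline description.
import json

TTP = """
{
  "NDM_metadata": {"name": "toy-pipeline"},
  "flow_tables": [
    {"name": "ingress", "match_set": [{"field": "IN_PORT"}]},
    {"name": "mac", "match_set": [{"field": "ETH_DST"}]}
  ]
}
"""

def table_matches(ttp):
    """Map each flow table name to the list of match fields it allows."""
    return {t["name"]: [m["field"] for m in t.get("match_set", [])]
            for t in ttp.get("flow_tables", [])}

if __name__ == "__main__":
    ttp = json.loads(TTP)
    print(table_matches(ttp))  # {'ingress': ['IN_PORT'], 'mac': ['ETH_DST']}
```

Real TTPs bury more meaning in the surrounding structure (flow_mod_types, built-in flows, table graph), which is where most of the extraction work lies.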
I've been using python and ryu to parse the pipeline, and have made a start on detecting where flows/rules can be added to it. The ryu app grabs all the rules from a switch and attempts to fit them into the TTP pipeline. This is currently a very simple process where a flow/rule can be placed in a table if its match set is within the TTP's allowed matches; order dependencies, as well as the instructions associated with the flow/rule, are being ignored.
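The simple fitting check described above reduces to a subset test on match fields. A minimal sketch, with illustrative field and table names rather than the real ofdpa tables:

```python
# Sketch of the naive fitting step: a rule can be placed in a table if
# its match fields are a subset of the fields the TTP allows for that
# table. Instructions and table-order dependencies are ignored here,
# as in the current implementation described above.

def fits(rule_fields, allowed_fields):
    return set(rule_fields) <= set(allowed_fields)

def candidate_tables(rule_fields, ttp_tables):
    """ttp_tables maps a table name to its allowed match fields."""
    return [name for name, allowed in ttp_tables.items()
            if fits(rule_fields, allowed)]

if __name__ == "__main__":
    tables = {"vlan": ["IN_PORT", "VLAN_VID"],
              "bridging": ["VLAN_VID", "ETH_DST"]}
    print(candidate_tables(["VLAN_VID", "ETH_DST"], tables))  # ['bridging']
    print(candidate_tables(["TCP_DST"], tables))              # []
```

An empty candidate list means the rule cannot be expressed in the pipeline at all under this (match-only) view.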
Going forward, the plan is to look at loading some of the sample TTP pipelines, and to extend the processing to consider the dependencies involved in reaching a table in order to install a rule there.