Spent some time tidying up control messages and configuration when scheduling tests that require cooperation from the server. As part of the previous changes, the port number was no longer being sent to tests, which meant they could only operate using the default port; this is now fixed and works for both scheduled and standalone tests. Also fixed some parameter parsing when running standalone tests, where empty parameter lists were not being created properly.
Wrote some basic unit tests for the udpstream test and its control messages. Fixed a possible memory leak when failing to send udpstream packets. Made sure documentation and protobuf files agreed on default values of test parameters.
Started to install the server-side components of AMP on another machine for a test deployment so that I can use the documentation I write as I go to help build/update the packaging for the most recent versions.
Started adding support for the new AMP UDPStream test to NNTSC, ampy and amp-web. Test results are now successfully inserted into the database and we can plot simple latency and loss graphs for the UDP streams. Next major tasks are to produce a new graph type that can be used to represent the jitter observed in the stream and to get some event detection working.
Spent much of my week chasing Influx issues. The first was that a change in how the last() function worked in 0.11 was messing with our enforced rollup approach -- the timestamp returned with the last row was no longer the timestamp of the last datapoint in the table; it was now the timestamp of the start of the period covered by the 'where' clause in your query. However, we had been using last() to figure out when we had last inserted an aggregated datapoint into the rollup tables, so this no longer worked.
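A sketch of the workaround, not the actual NNTSC code (function and measurement names here are my own): instead of relying on last(), the newest row can be requested explicitly, assuming InfluxQL's ORDER BY time DESC is available, so the timestamp returned is that of the final datapoint rather than the start of the queried period.

```python
def newest_rollup_query(measurement):
    # Hypothetical measurement name; ORDER BY time DESC LIMIT 1 asks for the
    # most recent point directly, sidestepping the changed last() semantics.
    return 'SELECT * FROM "%s" ORDER BY time DESC LIMIT 1' % measurement

def last_inserted_timestamp(rows):
    # rows: list of (timestamp, value) tuples from the client, newest-first
    # because of the ORDER BY in the query above.
    if not rows:
        return None
    return rows[0][0]

print(newest_rollup_query("rollup_amp_icmp"))
print(last_inserted_timestamp([(1457395200, 12.5)]))
```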
The other issue I've been chasing (with mixed success) is memory usage when backfilling old data after NNTSC has been down for a little while. I believe this is mostly related to Influx caching our enforced rollup query results, which will be a lot of data if we're trying to catch up on the AMP queue. The end result on prophet is a machine that spends a lot of time swapping when you restart NNTSC with a bit of a backlog. I need to find a way to stop Influx from caching those query results or at least to flush them a lot sooner.
Added a latency measure to the udpstream test by reflecting probe packets at the receiver. The original sender can combine the RTT information with jitter and loss to calculate Mean Opinion Scores, which was slightly annoying as (depending on the test direction) the remote end of the test now has to collate and send back partial result data. Updated the ampsave function to reflect the new data reported by the test.
Updated the display of tcpping test information in the scheduling website to reflect the new packet size options. Worked with Shane to update the lamp deployment to the newest version of all the event detection and web display/management software.
Tidied up some more documentation and sent it to a prospective AMP user. Will hopefully get some feedback next week as they try to install it and I can see which areas of the documentation are still lacking.
Spent the better part of this week reviewing literature and thinking about the best starting point and the first issue to tackle.
CacheFlow gives a good outline of building dependency graphs, and the header space work it builds its solution upon seems like a good approach: treating packet headers as a series of bits rather than a set of fields. If I take this approach I will have to extend the solution to deal with multiple tables. Alternatively, the FlowAdapter approach of normalising to a single table is still a possibility (some type of dependency graph is part 1 of this step anyway). My current thinking is that a dependency graph is likely to result in better optimisations than one big table, which would essentially have to be undone when placed back onto a multi-table switch.
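To illustrate the header-space view mentioned above, here is a minimal sketch (names are my own, not from CacheFlow): a field match becomes a ternary string of '0'/'1'/'x' bits rather than a (field, value) pair, and overlap between rules reduces to a bitwise check.

```python
def prefix_to_bits(addr, plen, width=32):
    """Render an integer address with a prefix length as ternary bits."""
    bits = format(addr, '0%db' % width)
    return bits[:plen] + 'x' * (width - plen)

def intersects(a, b):
    """Two ternary strings overlap unless some bit demands both 0 and 1."""
    return all(x == y or x == 'x' or y == 'x' for x, y in zip(a, b))

# 10.0.0.0/8 becomes '00001010' followed by 24 wildcard bits
ten_net = prefix_to_bits(10 << 24, 8)
more_specific = prefix_to_bits((10 << 24) | (1 << 16), 16)  # 10.1.0.0/16
other_net = prefix_to_bits(11 << 24, 8)                     # 11.0.0.0/8
print(intersects(ten_net, more_specific))  # True: dependency exists
print(intersects(ten_net, other_net))      # False: disjoint rules
```

A dependency edge in a graph like CacheFlow's only needs to exist where two rules' bit strings intersect, which this representation makes uniform across all header fields.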
I looked at the state of the art in minimising TCAM entries; most work, particularly in the past, has been on prefix-based optimisation (as seen in routes etc.). More recently, OpenFlow has sparked interest in generic TCAM rule optimisation (without the prefix restriction), though there currently appears to be only a single online solution. I don't think this is going to be a main area of my research; however, if I have the time I could try an existing solution in the pipeline directly before installing on the switches.
I read a few related papers which focused on spreading rules amongst multiple switches. These tend to be limited to spreading only the policy, not the forwarding, and tend to construct subsets of rules in such a way that order does not matter, allowing the rules to be placed in any table along a packet's path. This restriction is not needed within the bounds of a single switch, as the order of tables is known and there is essentially only a single path. As such, while interesting and useful as inspiration for algorithms, without the ordering restriction it is actually easy to move rules around: lower priority rules can be moved to a later table.
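The point about moving lower priority rules to a later table can be shown with a toy model (my own sketch, not from the papers): a packet walks the tables in a fixed order and a miss falls through, so splitting a priority-ordered rule list across tables preserves behaviour as long as higher-priority rules stay in an earlier-or-same table.

```python
def lookup(tables, packet):
    """tables: list of tables; each table is a list of (priority, match_fn,
    action) tried highest-priority first; a table miss falls through to the
    next table, and a miss in the last table drops the packet."""
    for table in tables:
        for _, match, action in sorted(table, key=lambda r: -r[0]):
            if match(packet):
                return action
    return 'drop'

high = (10, lambda p: p == 'a', 'out:1')
low = (1, lambda p: True, 'out:2')

one_table = [[high, low]]
split = [[high], [low]]   # low-priority rule moved to a later table
print(lookup(one_table, 'a'), lookup(split, 'a'))  # out:1 out:1
print(lookup(one_table, 'b'), lookup(split, 'b'))  # out:2 out:2
```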
Finished up the first release version of the event filtering for amp-web and rolled it out to lamp on Thursday morning. Most of this week's work was polishing up some of the rough edges and making sure the UI behaves in a reasonable fashion -- Brad was very helpful playing the role of an average user and finding bad behaviour.
Post-release, tracked down and fixed the issue that was causing netevmon to not run the loss detector. Added support for loss events to eventing and the dashboard.
Released a new version of libprotoident, which includes all of my recent additions from the unexpected traffic study.
Marked the last libtrace assignment and pushed out the marks to the students.
After what seems like forever, I've finally managed to put together a new libprotoident release that includes all of the new protocol rules I've developed over the past couple of years. This release adds support for around 70 new protocols, including QUIC, SPDY, Cisco SSL VPN, Weibo and Line. A further 28 protocols have had their rules refined and improved, including BitTorrent, QQ, WeChat, Xunlei and DNS.
The lpi_live tool has been removed in this release, as it has been decommissioned in favour of the lpicollector tool.
Also, please note that libflowmanager 2.0.4 is required to build the libprotoident tools. Older versions of libflowmanager will fail the configure check.
The full list of changes can be found in the libprotoident ChangeLog.
Lots of minor fixes this week. Fixed the commands to properly kill the entire process group when stopping the AMP client using the init scripts. Still need a cleaner way to do this as part of the main process. Updated the AMP schedule fetching to follow HTTP redirects, which was required to make it work on the Lightwire deployment. Fixed the tcpping test to properly match response packets when the initial SYN contains payload. Different behaviour was observed in some cases where RSTs would acknowledge a different sequence number compared to a SYN ACK, and only one of these was being checked for.
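The tcpping matching fix can be sketched roughly as follows (function and field names are mine, not the actual tcpping code): when the probe SYN carries payload, one response type was observed acknowledging the ISN plus one while the other acknowledged the ISN plus one plus the payload length, so the test now accepts either acknowledgement rather than checking for just one.

```python
def is_response(probe_isn, payload_len, resp_ack):
    """Accept a response if it acknowledges either the bare SYN or the SYN
    plus its payload; previously only one of these was checked."""
    valid_acks = {probe_isn + 1, probe_isn + 1 + payload_len}
    return resp_ack in valid_acks

print(is_response(1000, 20, 1001))   # True  (acks the SYN only)
print(is_response(1000, 20, 1021))   # True  (acks SYN plus payload)
print(is_response(1000, 20, 1002))   # False (unrelated packet)
```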
Updated all the tests to report the DSCP settings that they used. They are not currently saved into the database, but they are being sent to the collector now.
Set the default packet interval of the udpstream test to 20ms, which is closer to VoIP than the global AMP minimum interval that it was using. Also wrote most of the code for the test to calculate Mean Opinion Scores based on the ITU recommendations, just need to add a latency measure to complete the calculation.
Started working through setting up and running a handful of OpenFlow applications, starting with switches. For this I'm trying to keep everything contained within Docker containers and scripts to remind me how to run each, as well as keeping things as portable as possible. I'm using mininet to simulate a small number of hosts for each.
I've set up the ONOS docker again, which includes a simple switch and a simple mininet network. I've also configured Valve, a VLAN switch, running from a docker with a VLANed network. I wrote Dockerfiles for Faucet, another VLAN switch, and fixed a couple of bugs which have been merged back into GitHub. Faucet is based upon Valve, but provides an interesting case by being a multi-table application, unlike Valve and ONOS's switch.
I've spent some time manually going through the resulting flow tables from the switches tested, and it seems hard to make many improvements to the single-table rules, such as converting them to a multi-table layout similar to Faucet's. A single-table switch reactively installs rules connecting two hosts only when both try to talk to each other; if it did not, it would need a rule for each src/dst pair, i.e. it scales with hosts^2. Whereas a multi-table switch like Faucet maintains a learning table and a forwarding table, with each host appearing in both, scaling as 2*hosts. Because the reactive single-table learning does not install all src/dst pairs, the jump to separate src and dst tables is invalid, as it would install rules for src/dst pairs that did not exist in the original.
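The scaling argument above can be made concrete with a toy rule-count model (my own sketch, with simplified assumptions about what each layout installs):

```python
def single_table_rules(pairs):
    """Reactive single-table switch: one rule per direction of each
    communicating host pair, so the worst case grows with hosts^2."""
    return 2 * pairs

def multi_table_rules(hosts):
    """Faucet-style layout: each host appears once in the learning table
    and once in the forwarding table, so rules grow as 2*hosts."""
    return 2 * hosts

# With 50 hosts all talking to each other:
hosts = 50
pairs = hosts * (hosts - 1) // 2
print(single_table_rules(pairs))  # 2450
print(multi_table_rules(hosts))   # 100
```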
I'm also working through recent literature and re-reading some existing papers in relation to the problem. I've just started compiling an updated document with possible approaches from the literature.
Only worked three days this week -- on leave for the rest.
Continued developing the event filtering mechanism for the amp-web dashboard. Managed to make all of the filtering options work properly, including AS-based filtering and filtering based on the number of affected endpoints.
Changed event loading to happen in batches, so if the selected time range covers a lot of events we will only load 20 at a time. A new batch is loaded each time the user scrolls to the bottom of the event list. This means that we can now replicate the old infinite scrolling event list behaviour on the dashboard, so I've removed the former page.
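A rough sketch of the batching behaviour (the names here are illustrative, not the actual amp-web API): each scroll-to-bottom requests the next batch of 20 events from wherever the previous batch ended.

```python
BATCH_SIZE = 20

def fetch_batch(events, offset, size=BATCH_SIZE):
    """Return one page of events plus the offset for the next request."""
    batch = events[offset:offset + size]
    return batch, offset + len(batch)

events = list(range(45))          # 45 events in the selected time range
batch1, offset = fetch_batch(events, 0)
batch2, offset = fetch_batch(events, offset)
batch3, offset = fetch_batch(events, offset)
print(len(batch1), len(batch2), len(batch3))  # 20 20 5
```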
Added automatic fetching of new events to the dashboard, so the event list is now self-updating rather than requiring a refresh of the whole page to see any new events.
Did some reading around calculating mean opinion scores for VoIP and started to add code to the udpstream test to calculate it both the Cisco way and the ITU E-model way. Neither of them explicitly takes jitter into account, which seems unusual; my best guess so far is that they count jitter as part of the delay. Other models I've found do include jitter as part of the delay calculation.
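For reference, a heavily simplified sketch of the E-model side of the calculation (the impairment inputs here are placeholders and the real G.107 model has many more terms): an R-factor starts from a base value, is reduced by delay and equipment/loss impairments, then is mapped to a MOS with the standard cubic formula.

```python
def r_to_mos(r):
    """Standard E-model mapping from R-factor to MOS (for 0 <= R <= 100)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def mos_estimate(delay_impairment, loss_impairment, base_r=93.2):
    # 93.2 is the default R value in ITU-T G.107; the impairment terms
    # here stand in for the full Id and Ie-eff calculations.
    r = base_r - delay_impairment - loss_impairment
    return round(r_to_mos(r), 2)

print(mos_estimate(10, 0))   # good quality, MOS around 4.1
```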
Spent some time writing more documentation about installing and configuring an amplet client. The install process, configuration options and schedule file options all got a first-draft description, hopefully enough to help people install monitors with minimal assistance, though I expect they will need to be expanded. Updated example configuration files to agree with the new documentation.
Various small fixes, including updating the standalone icmp and tcpping tests to print human readable icmp errors rather than printing the type and code, and using Python .egg format in the ampsave packages.
Merged my scheduling parts of the website back into the main branch so that others can start using the features I've added.