Libtrace 3.0.21 has been released today.
This release fixes many bugs that have been reported by our users, including:
* trace_interrupt() now works properly for int, bpf, dag and ring formats.
* fixed double-counting of accepted packets when using the event API.
* fixed incorrect filtered packet counts for bpf format.
* fixed crash when performing very large reads with libwandio.
* fixed inconsistent behaviour if a bad filter string is used with int and dag formats.
* fixed potential infinite loop when combining filters, the event API and the pcapint format.
* fixed incorrect wire lengths when using the SNAPLEN config option to truncate packets captured using the int format.
The full list of changes in this release can be found in the libtrace ChangeLog.
You can download the new version of libtrace from the libtrace website.
Finished up a draft of the PAM paper, eventually managing to squeeze it into the 12 page limit.
Spent a bit of time learning about DPDK while investigating a build bug reported by someone trying to use libtrace's DPDK support. Turns out we were a little way behind current DPDK releases, but Richard S has managed to bring us more up-to-date over the past few days. Spent my Friday afternoon fixing up the last outstanding known issue in libtrace (trace_interrupt not working for most live formats) in preparation for a release in the next week or two.
After discussion with Richard, I made initial steps to put the churn paper into PAM submission format. The first attempt came to 22 pages against a 10-page limit. After removing two sections and the related discussion, dropping the graph captions, and switching to sub-caption formatting, the result was 11 pages.
An alternative to the severe restrictions of PAM is the "International Journal of Computer Networks & Communications", which has a limit of 20-25 pages and is published bi-monthly. I will need to find out more about this journal and its suitability.
In the non-event-based Doubletree simulator, the method that lets many sources make use of local stop sets currently runs for excessively long periods of time. It already uses text-file lists of the source addresses that occur more than once, to limit the size of the arrays holding these values. So far I have not found any further ways to improve the run times. Fortunately, the benefit of local stop sets is expected to be small, with only a few runs occurring from each source. It may also be possible to measure the savings in a few thousand cases, for between 2 and 12 traces from a given source, and extrapolate the overall savings to the hundreds of thousands of cases.
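The extrapolation idea above can be sketched roughly as follows: measure the saving on a sample of sources for each trace count, then scale the per-count sample means by how many sources have that count in the full population. This is a minimal illustration with synthetic numbers; `sampled_saving` stands in for one simulated source, and all names and values here are invented, not taken from the real simulator.

```python
import random

random.seed(1)

def sampled_saving(trace_count):
    # Stand-in for simulating one source that probes `trace_count` times;
    # the real saving would come from the Doubletree simulation itself.
    return random.uniform(0, trace_count - 1)

def predict_total_savings(source_counts, samples_per_count=1000):
    """Estimate total savings from per-trace-count sample means.

    source_counts maps a trace count (2-12) to how many sources in the
    full population have that many traces.
    """
    total = 0.0
    for traces, num_sources in source_counts.items():
        mean = sum(sampled_saving(traces)
                   for _ in range(samples_per_count)) / samples_per_count
        total += mean * num_sources  # scale sample mean to full population
    return total

# e.g. a population with 200k sources that have 2 traces each, 50k with 3, ...
population = {2: 200000, 3: 50000, 12: 1000}
print(predict_total_savings(population))
```

The appeal is that only a few thousand simulated cases per trace count are needed, rather than running every source through the slow local-stop-set path.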
Built new Debian and CentOS packages for the updated libwandevent code,
and used those to build new amplet2 packages for CentOS. Debian packages
still need a bit more work to build in my new environment. Deployed a
couple of the new packages to further test some of the new traceroute
reporting for Shane.
Hooked up the rest of the test arguments in the form to schedule a new
test, so they are all now properly added to the database when the form
is submitted.
Filtered the YAML output to only include meshes that are used in the
schedule to reduce file size. Added code to track the time that
schedules were last updated, so that I can return a 304 not modified to
clients that request the YAML when there have been no changes.
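The 304 logic above boils down to comparing the schedule's tracked last-update time against the client's `If-Modified-Since` header. A minimal sketch, assuming the standard HTTP date format; `respond_schedule` and `schedule_updated` are illustrative names, not from the real codebase:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond_schedule(schedule_updated, if_modified_since=None):
    """Return (status, headers) for a schedule YAML request.

    schedule_updated is the tracked time the schedule last changed;
    if_modified_since is the raw header value from the client, if any.
    """
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if schedule_updated <= client_time:
            return 304, None  # client's copy is current; send no body
    # Full response: tell the client when this version was generated so
    # it can make a conditional request next time.
    headers = {"Last-Modified": format_datetime(schedule_updated, usegmt=True)}
    return 200, headers

updated = datetime(2013, 10, 1, 12, 0, tzinfo=timezone.utc)
status, _ = respond_schedule(updated, "Tue, 01 Oct 2013 12:00:00 GMT")
print(status)  # 304: unchanged since the client last fetched it
```

Clients that have never fetched the YAML simply omit the header and get a normal 200 with a `Last-Modified` stamp.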
Spent Wednesday watching student honours presentations. Well done to our
students who presented.
Spent most of my week writing up a paper for PAM on the event detectors we've implemented in netevmon.
Wrote and tested a script to ease the transition from the current per-address stream format to a per-family stream format. We've already accepted that we're not going to try and migrate any existing collected data for the affected collections, so it is mostly a case of making sure we drop all the right tables (and don't drop any wrong ones).
Spent Wednesday at the student Honours conference. Our students did fairly well and were much improved on their practice talks.
Successful week this week:
I recompiled everything from 6lbr's development branch (i.e. native 6lbr, mbxxx slip-radio and mbxxx 6lbr-demo), explicitly setting PAN IDs to 0xabcd for the mbxxx builds (for consistency with other platforms). I also made sure to reset the MAC and RDC layers of each build to use the "null" drivers, which ensure that packets should be transmitted/received as quickly as the hardware allows (with no regard for power usage). This resulted in excellent quality packet transmission, with the possibility of 0% packet loss over several minutes of pinging the devices, even for fragmented packets. The RTT is a quite respectable 60-70ms for pings split into two fragments.
Using the development branch for the mbxxx platform meant that the memory footprint increased again, which was a big problem for the 6lbr-demo app. To offset this, I completely disabled TCP (since it isn't needed by CoAP), and halved most of the buffers used by RPL. Thanks to gcc's magical -fpack-struct=1 there is now a working CoAP server on the device that can return its uptime in seconds!
Summaries from the three runs of the load-balancer black-hole detector have been compiled. Each run covers about 5000 load balancers; a total of six very short-lived cases were seen, along with one case of repeated detection and six cases of long-lived discontinuity. In each case the non-load-balancer nodes remained unchanged before and after the hourly Paris Traceroute runs.
I have been getting some Doubletree results for small numbers of vantage points using non-event-based simulation. This is useful for debugging purposes as the turnaround time is quick. Now that debugging is completed for most of the analysis modes, I have started some full-length runs, which should take about two weeks in most cases. I am still deciding whether I can do 1, 2 and 10 stage analysis for many-to-few probing. If I can find a way to make it run in a short enough time, I will set that going as well.
Wrote a script to query prophet's database to extract the Smokeping time series used to generate the event ground truth data used in Meena's masters project, with an eye towards releasing the time series and the associated events that we have identified as a dataset for the anomaly detection community to use to validate and compare new techniques.
Went over all of the events that we had found and updated them to match the current output of our event detection software, which had changed quite a bit since we originally collected the events. There were also quite a few errors and inconsistencies in the significance ratings for the events, so I ended up spending most of my week working on this. Many of the changes were made to events that I had originally classified, so I can't blame the students entirely :)
Spent a decent chunk of Wednesday listening to our students give their Honours practice talks. The good thing is that they all appear to have done some useful work so far, but there's a bit of work to do in terms of making that work accessible to a general CS audience.
Fixed the way I build the data for the YAML output so that the emitter
can better tell which parts should be used as aliases/anchors (which
makes groups of test destinations a lot tidier looking).
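The anchor/alias behaviour mentioned above can be seen with PyYAML (assumed here for illustration): if the same Python object is reused wherever the data is shared, rather than copied, the emitter serialises it once as an anchor and refers back to it with aliases. The mesh and test names below are made up:

```python
import yaml  # PyYAML, assumed available

# Reusing the *same* list object for each test lets the emitter detect
# the shared data; a copied list would be emitted in full both times.
targets = ["ampz-auckland", "ampz-waikato", "ampz-otago"]
schedule = {
    "icmp": {"targets": targets},
    "traceroute": {"targets": targets},  # same object, not a copy
}
out = yaml.dump(schedule)
print(out)  # first occurrence gets an anchor, the second a short alias
```

This is why building the output data from shared references, instead of regenerating equivalent lists per test, makes the groups of destinations so much tidier in the emitted YAML.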
Added more dynamic content to the schedule pages using data from the
actual metadata/schedule tables rather than hard coding it to test
layout/behaviour. Sources and destinations are all fetched from the
database, and current test schedules are displayed.
Added API functions to insert tests into a schedule, and hooked it up to
the data coming from the schedule modal form. Most of the data for
creating a new test is now understood and inserted into the schedule table.
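In outline, inserting a test from the form data is a single parameterised insert into the schedule table. This is a minimal sketch using an in-memory SQLite database; the real system has its own schema and API, so the table, column, and function names here are all invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE schedule (
    id INTEGER PRIMARY KEY,
    source TEXT, destination TEXT, test TEXT, frequency INTEGER)""")

def add_test(form):
    # Parameterised query keeps untrusted form values out of the SQL itself.
    conn.execute(
        "INSERT INTO schedule (source, destination, test, frequency) "
        "VALUES (?, ?, ?, ?)",
        (form["source"], form["destination"], form["test"], form["frequency"]))
    conn.commit()

# Data as it might arrive from the schedule modal form:
add_test({"source": "ampz-waikato", "destination": "ampz-auckland",
          "test": "icmp", "frequency": 60})
print(conn.execute("SELECT COUNT(*) FROM schedule").fetchone()[0])  # 1
```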
Had my two practice presentations this week, which both went well, and I received some good feedback from the WAND group. I've been busy with assignments and job interviews this week, so I haven't made any further progress.