Libtrace is a library for both capturing and processing packet traces. It supports a variety of common trace formats, including pcap, ERF, legacy ERF and TSH, as well as live capture via DAG cards and native Linux and BSD sockets. Libtrace also supports reading and writing using several different compression formats, including gzip, bzip2 and lzo. Libtrace uses a multi-threaded approach for decompressing and compressing trace files to improve trace processing performance on multi-core CPUs.

The libtrace API provides functions for accessing the headers in a packet directly,
up to and including the transport header.
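
For example, a minimal program that reads packets from a trace and pulls out the IPv4 and TCP headers looks something like the sketch below (the trace URI is an example only and error handling is trimmed):

    /* Sketch: read packets from a pcap trace and inspect the IPv4 and TCP
     * headers directly. The URI is an example only. */
    #include <libtrace.h>
    #include <stdio.h>
    #include <arpa/inet.h>

    int main(void) {
        libtrace_t *trace = trace_create("pcapfile:trace.pcap.gz");
        libtrace_packet_t *packet = trace_create_packet();

        if (trace_is_err(trace) || trace_start(trace) == -1)
            return 1;

        while (trace_read_packet(trace, packet) > 0) {
            libtrace_ip_t *ip = trace_get_ip(packet);    /* NULL if not IPv4 */
            libtrace_tcp_t *tcp = trace_get_tcp(packet); /* NULL if not TCP */

            if (ip && tcp)
                printf("TCP %d -> %d, TTL %d\n",
                        ntohs(tcp->source), ntohs(tcp->dest), ip->ip_ttl);
        }

        trace_destroy_packet(packet);
        trace_destroy(trace);
        return 0;
    }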

Libtrace can also output packets using any supported output trace format, including
pcap, ERF, DAG transmit and native sockets.
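
Output uses a matching set of calls; as a rough sketch, a simple format conversion helper might look like this (the helper name and URIs are examples only, error handling trimmed):

    /* Sketch: copy every packet from an input trace to an output trace,
     * e.g. convert("pcapfile:in.pcap.gz", "erf:out.erf.gz"). */
    #include <libtrace.h>

    int convert(const char *in_uri, const char *out_uri) {
        libtrace_t *in = trace_create(in_uri);
        libtrace_out_t *out = trace_create_output(out_uri);
        libtrace_packet_t *packet = trace_create_packet();

        if (trace_start(in) == -1 || trace_start_output(out) == -1)
            return -1;

        while (trace_read_packet(in, packet) > 0)
            trace_write_packet(out, packet);

        trace_destroy_packet(packet);
        trace_destroy_output(out);
        trace_destroy(in);
        return 0;
    }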

Libtrace is bundled with several tools for performing common trace processing and analysis tasks. These include tracesplit, tracemerge, traceanon, tracepktdump and tracereport (amongst others).




Finished up the demo for STRATUS forum and helped Harris put together both a video and a live website.

Spent a bit of time trying to fix some unintuitive traceroute events that we were seeing on lamp. The problem was arising when a normally unresponsive hop was responding to traceroute, which was inserting an extra AS transition into our "path".

Rebuilt DPDK and Ostinato on 10g-dev2 after Richard upgraded it to Jessie so that I can resume my parallel libtrace development and testing once he's done with his experiments.

Installed and tested a variety of Android emulators to try and set up an environment where JP and I can more easily capture mobile app traffic. Bluestacks on my iMac turned out to be the most useful, as the others I tried either lacked the Google Play Store (so finding and installing the "official" apps would be hard) or needed more computing power than I had available.




Tested and fixed my vanilla PF_RING libtrace code. I've been able to get performance comparable to the pfcount tool included with the PF_RING release, so I'm fairly happy with that. Started working on adding support for the ZC version of the PF_RING driver, which uses an entirely different API.

Helped Harris get his head around how NNTSC works so that he could add support for the Ceilometer data. Set myself up with an OpenStack VM so that I can start working on the web graphs to display the data now that it is in a NNTSC database. Also spent a bit of time writing up an explanation of how netevmon works so that Harris can start looking into running our detectors against the Ceilometer data.

Worked with Brendon on Friday to get NNTSC and netevmon installed and running on the lamp machine.




Wrote some scripts to do some basic exploration of the Ceilometer data to check which collections and series are most suitable for using with netevmon. I've found that around 30-35% of the series for CPU utilisation, network byte rates and disk read/write rates are at least long enough to be worth using -- this works out to ~600 series for each metric, so we'll have a reasonable sample size. The spacing between measurements is more of a concern, as it is very inconsistent. There are some parts of NNTSC and netevmon that assume a fairly constant measurement rate, so these will need to be re-evaluated.

Started adding PF_RING support to libtrace. For a start, I'm just working with the standard PF_RING driver (not the ZC extension) and I've written code that should work with the old API. Once I've tested that, I'll start adding native parallel support using one thread per receive queue in the driver.
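
For reference, the receive path I'm coding against looks roughly like the sketch below. This is not the actual libtrace format module, just the shape of the standard PF_RING calls as I understand them (exact signatures vary between PF_RING releases, and the helper name is made up):

    /* Rough sketch of a single-queue PF_RING receive loop; not the actual
     * libtrace code, and PF_RING signatures differ between releases. */
    #include <pfring.h>

    void rx_loop(const char *dev) {
        struct pfring_pkthdr hdr;
        u_char *data = NULL;
        pfring *ring = pfring_open(dev, 1500 /* snaplen */, PF_RING_PROMISC);

        if (ring == NULL || pfring_enable_ring(ring) != 0)
            return;

        for (;;) {
            /* buffer_len == 0 asks for a zero-copy pointer into the ring */
            int rc = pfring_recv(ring, &data, 0, &hdr, 1 /* block */);
            if (rc < 0)
                break;                  /* error: bail out */
            if (rc > 0) {
                /* hand hdr.len bytes at 'data' to the processing code */
            }
        }

        pfring_close(ring);
    }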

Also spent a bit of time planning a paper on parallel libtrace. I'm anticipating that the main narrative will be about how we've achieved better potential performance by adding parallelism (depending on the workload and the number of threads), while still maintaining the key design goals of the library (e.g. abstraction of complexity, format agnosticism, etc.). We'll show that the same parallel libtrace code can achieve better performance across multiple input formats, i.e. DAG, ring, PF_RING (once complete).




Spent a fair chunk of my week proof-reading, first a document responding to questions about the BTM project, then Dan and Darren's Honours reports.

Tracked down and fixed a bug in parallel libtrace where ticks were messing with the ordered combiner, causing some packets to be sent to the reporter out of order. Also managed to replicate and fix the memory leak bug that was causing Yindong's wdcap on wraith to invoke the OOM killer.

Continued poking at unknown port 443 and port 80 traffic in libprotoident. Most of my time was spent trying to install and capture traffic from various Chinese applications that I had reason to suspect were causing most of my remaining unknown traffic, with mixed success.




Finally released the libtrace4 beta on Tuesday, after doing some final testing with the DAG cards in the 10G dev machines.

Managed to find a few more protocols to add to libprotoident, but am now trying to move towards releasing a new version. Started having a closer look at TCP port 80 and TCP port 443 traffic in my Waikato traces, with the aim of trying to get as much traffic correctly classified as I can prior to doing an in-depth analysis of what is actually using those ports.

Spent Friday afternoon reading over Darren's honours report and providing some hopefully useful feedback.




The long-awaited libtrace 4 is now available for public consumption! This version of libtrace includes an all new API that resulted from Richard Sanger's Parallel Libtrace project, which aimed to add the ability to read and process packets in parallel to libtrace. Libtrace can now also better leverage any native parallelism in the packet source, e.g. multiple streams on DAG, DPDK pipelines or packet fanout on Linux interfaces.

At this stage, we are considering the software to be a beta release, so we reserve the right to make any major API-breaking changes we deem necessary prior to a final release, but I'm confident that the beta will be fairly close to the final product. Now is also a good time to try the new API and let us know if there are any problems with it, as it will be difficult to make API changes once libtrace 4 moves out of beta.

Please note that the old libtrace 3 API is still entirely intact and will continue to be supported and maintained throughout the lifetime of libtrace 4. All of your old libtrace 3 programs should still build and run happily against libtrace 4; please let us know if this turns out to not be the case so we can fix it!
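
To give a flavour of the new API, a trivial parallel program looks something like the sketch below (the URI and thread count are examples only; the HOWTO linked below is the proper introduction):

    /* Sketch: process packets using four threads with the parallel API. */
    #include <libtrace_parallel.h>

    static libtrace_packet_t *per_packet(libtrace_t *trace, libtrace_thread_t *t,
            void *global, void *tls, libtrace_packet_t *packet) {
        /* per-thread packet processing happens here */
        return packet;          /* hand the packet back to libtrace */
    }

    int main(void) {
        libtrace_t *trace = trace_create("ring:eth0");
        libtrace_callback_set_t *pktcbs = trace_create_callback_set();

        trace_set_packet_cb(pktcbs, per_packet);
        trace_set_perpkt_threads(trace, 4);

        if (trace_pstart(trace, NULL, pktcbs, NULL) == -1) {
            trace_perror(trace, "trace_pstart");
            return 1;
        }
        trace_join(trace);      /* wait for all processing threads to finish */

        trace_destroy(trace);
        trace_destroy_callback_set(pktcbs);
        return 0;
    }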

Learn about the new API and how parallel libtrace works by reading the Parallel Libtrace HOWTO.

Download the beta release from the libtrace website.

Send any questions, bug reports or complaints to contact [at] wand [dot] net [dot] nz




Fixed the issues with BSD interfaces in parallel libtrace. Ended up implementing a "bucket" data structure for keeping track of buffers that contain packets read from a file descriptor. Each bucket effectively maintains a reference counter that is used to determine when libtrace has finished with all the packets stored in a buffer. When the buffer is no longer needed, it can be freed. This allows us to ensure packets are not freed or overwritten without needing to memcpy the packet out of the buffer it was read into.
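
Roughly, the idea looks like this (names and layout are illustrative only, not the actual libtrace code):

    /* Illustrative sketch of the bucket idea. Each bucket wraps one buffer
     * read from the file descriptor and counts the packets still pointing
     * into it; the buffer is only freed once the count drops to zero. */
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct bucket {
        uint8_t *buffer;        /* raw bytes read from the fd */
        int refcount;           /* packets still referencing this buffer */
        /* the real code needs a lock or atomic counter, since multiple
         * threads may release packets concurrently */
    } bucket_t;

    static void bucket_ref(bucket_t *b) {
        b->refcount++;
    }

    static void bucket_unref(bucket_t *b) {
        if (--b->refcount == 0) {
            free(b->buffer);    /* no packets left: safe to release */
            free(b);
        }
    }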

Added bucket functionality to both RT and BSD interfaces. After a few initial hiccups, it seems to be working well now.

Continued testing libtrace with various operating systems / configurations. Replaced our old DAG configuration code, which used a deprecated API call, with code that uses the CSAPI. Just need to get some traffic on our DAG development box so I can make sure the multiple-stream code works as expected.

Managed to add another two protocols to libprotoident: Google Hangouts and Warthunder.




Finished the parallel libtrace HOWTO guide. I'm pretty happy with it and hopefully it will ease the learning curve for users who want to move over to the parallel API once it is released.

Continued working towards the beta release of libtrace4. Started testing on my usual variety of operating systems, fixing any bugs or warnings that cropped up along the way. It looks like there are definitely some issues with using the parallel API with BSD interfaces, so that will need to be resolved before I can do the release.

Now that I've got a full week of Waikato trace, I've been occasionally looking at the output from running lpi_protoident against the whole week and seeing if there are any missing protocols I can identify and add to libprotoident. Managed to add another 6 new protocols this week, including Diablo 3 and Hearthstone.

Met with Rob and Stephen from Endace on Thursday morning and had a good discussion about how we are using the Endace probe and what we can do to get more out of it.




Fixed the bug that was causing my disk-writing wdcap to crash. The problem was a slight incongruity between the record length stored in the ERF header and the amount of packet data actually available, so we were occasionally touching bad memory. Since fixing that, wdcap has been happily capturing since Tuesday morning without any crashes or dropped packets.

Continued working towards a releasable libtrace4. Polished up the parallel tools and replaced the old rijndael implementation in traceanon with the libcrypto code from wdcap. Made sure all the examples build nicely and added a full skeleton example that implements most of the common callbacks. Fixed all the outstanding compiler warnings and made sure all the header documentation was correct and coherent.

Started developing a HOWTO guide for parallel libtrace that introduces the new API, one step at a time, using a simple example application. I'm about half-way through writing this.




Added anonymisation to the new wdcap, using libcrypto to generate the AES-encrypted bits that we use as a mask to anonymise IP addresses. This has two major benefits: 1) we don't need to maintain or support our own implementation of AES and 2) libcrypto is much more likely to be able to detect and use AES CPU instructions.
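
The general approach looks something like the sketch below. This is illustrative only -- the real wdcap anonymisation is prefix-preserving and considerably more involved -- but it shows the libcrypto usage (the helper name is made up):

    /* Illustrative only: use libcrypto's AES to produce a pseudo-random
     * mask and XOR it into an IPv4 address. Not the actual wdcap code. */
    #include <openssl/evp.h>
    #include <stdint.h>
    #include <string.h>

    uint32_t anon_ipv4(uint32_t addr, const unsigned char key[16]) {
        unsigned char in[16] = {0}, out[32];
        int outl = 0;
        uint32_t mask;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        memcpy(in, &addr, sizeof(addr));        /* derive the block from the address */
        EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);
        EVP_EncryptUpdate(ctx, out, &outl, in, sizeof(in));
        EVP_CIPHER_CTX_free(ctx);

        memcpy(&mask, out, sizeof(mask));
        return addr ^ mask;                     /* apply the AES-derived mask */
    }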

Deployed the new wdcap and libtrace on the Endace box. I've managed to get wdcap running but the client that is writing the packets to disk has a tendency to crash after an hour or two of capture, so there's still a wee way to go before it's ready. The wdcap server that is capturing off the DAG card and exporting to the client has been pretty robust.

Started polishing up libtrace4 for a beta release. There's still quite a bit of fiddly outstanding work to do before we can release. Fixed the tick bug that I discovered a couple of weeks back and finished up the new callback API that I had proposed to Richard a few months back and that he had mostly implemented. Ported tracertstats and tracestats to use the new API.




Continued working on the new wdcap. Wrote and tested both the disk and RT output modules. Re-designed the RT protocol to fix a number of oversights in the original design -- unfortunately this means that RT from wdcap4 is not compatible with libtrace3 and vice-versa.

Added parallel support for RT input to libtrace4 -- prior to this, RT packets were not read in a way that was thread-safe so using RT with the parallel API would result in some major races. The RT packets are now copied into memory owned by the libtrace packet before being returned to the caller; this adds an extra memcpy but ensures concurrent reads won't overwrite each other. Using a well-managed ring buffer could probably get rid of the memcpy, so I'll probably look into that once I've got everything else up and running.

Spent Wednesday at the Honours conference.




Continued working on the new WDCap. Most of my time was spent writing the packet batching code that groups processed packets into batches and then passes them off to the output threads.

Along the way I discovered a bug with parallel libtrace when using ticks with a file that causes the ordered combiner to get stuck. Spent a bit of time with Richard trying to track this one down.

Listened to our Honours students' practice talks on Thursday and Friday afternoon. Overall, I was fairly impressed with the projects and what the students had achieved and am looking forward to seeing their improved talks on Wednesday.




Continued working on wdcap4. The overall structure is in place and I'm now adding and testing features one at a time. So far, I've got snapping, direction tagging, VLAN stripping and BPF filtering all working. Checksum validation is working for IP and TCP; just need to test it for other protocols.

Still adding and updating protocols in libprotoident. The biggest win this week was being able to identify Shuijing (Crystal): a protocol for operating a CDN using P2P.

Helped Brendon roll out the latest develop code for ampsave, NNTSC, ampy and amp-web to skeptic. This brings skeptic in line with what is running on prophet and will allow us to upgrade the public amplets without their measurement data being rejected.




Noticed a bug in my Plateau parameter evaluation which meant that Time Series Variability changes were being included in the set of Plateau events. Removing those meant that my results were a lot saner. The best set of parameters now gives an 83% precision rating and the average delay is now below 5 minutes. Started on a similar analysis for the next detector -- the Changepoint detector.

Continued updating libprotoident. I've managed to capture a few days of traffic from the University now, so that is introducing some new patterns that weren't present in my previous dataset. Added new rules for MongoDB, DOTA2, Line and BMDP.

Still having problems with long duration captures being interrupted, either by the DAG dropping packets or by the RT protocol FIFO filling up. This prompted me to start working on WDCap4: the parallel libtrace edition. It's a complete re-write from scratch so I am taking the time to carefully consider every feature that currently exists in WDCap and deciding whether we actually need it or whether we can do it better.




Made a video demonstrating BSOD with the current University capture point. The final cut can be seen at

Alistair King got in touch and requested that libwandio be separated from libtrace so that he can release projects that use libwandio without having libtrace as a dependency as well. With his help, this was pretty straightforward so now libwandio has a separate download page on the WAND website.

Continued my investigation into optimal Plateau detector parameters. Used my web-app to classify ~230 new events in a morning (less than 5 of which qualified as significant) and merged those results back into my original ground truth. Re-ran the analysis comparing the results for each parameter configuration against the updated ground truth. I've now got an "optimal" set of parameters, although the optimal parameters still only achieve 55% precision and 60% recall.

Poked around at some more unknown flows while waiting for the Plateau analysis to run. Managed to identify some new BitTorrent and eMule clients and also added two new protocols: BDMP and Trion games.




Short week as I was on leave on Thursday and Friday.

Continued tweaking the event groups produced by netevmon. My main focus has been on ensuring that the start time for a group lines up with the start time of the earliest event in the group. When this doesn't happen, it suggests that there is an incongruity in the logic for updating events and groups based on a new observed detection. The problem now happens only rarely, which is good in that I am making progress, but also bad because it takes a lot longer for a bad group to occur, so testing and debugging are much slower.

Spent a bit of time rewriting Yindong's python trace analysis using C++ and libflowmanager. My program was able to run much faster and use a lot less memory, which should mean that wraith won't be hosed for months while Yindong waits for his analysis to run.

Added a new API function to libtrace to strip VLAN and MPLS headers from packets. This makes the packets easier to analyse with BPF filters as you don't need to construct complicated filters to deal with the possible presence of VLAN tags that you don't care about.
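
As a usage sketch (assuming the new function is trace_strip_packet; the helper name and filter expression are examples only):

    /* Usage sketch: strip VLAN/MPLS headers before applying a BPF filter. */
    #include <libtrace.h>
    #include <stdint.h>

    uint64_t count_port80(libtrace_t *trace, libtrace_packet_t *packet) {
        libtrace_filter_t *filter = trace_create_filter("tcp port 80");
        uint64_t matched = 0;

        while (trace_read_packet(trace, packet) > 0) {
            /* remove any VLAN/MPLS headers so the filter sees plain IP */
            packet = trace_strip_packet(packet);
            if (trace_apply_filter(filter, packet) > 0)
                matched++;
        }

        trace_destroy_filter(filter);
        return matched;
    }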

Installed libtrace on the Endace probe and managed to get it happily processing packets from a virtual DAG without too much difficulty.




Packet capture is commonly used in networks to monitor the traffic that users are producing. This allows network operators to detect and monitor threats. Libtrace is a library which provides a simple programming interface for the capture and analysis of network packets.

This project aims to extend libtrace to support the processing of network packets in parallel, in order to improve libtrace's performance. We name the solution developed in this project parallel libtrace. An overview of parallel libtrace and the design process is presented. The challenges encountered in the design of parallel libtrace are described in more detail. By designing a user-friendly and efficient library, we have been able to improve the performance of some libtrace applications.

Richard Sanger




Spent much of my week keeping an eye on BTM and dealing with new connections as they came online. Had a couple of false starts with the Wellington machine, as the management interface was up but was not allowing any inbound connections. This was finally sorted on Thursday night (turning the modem on and off again did the trick), so much of Friday was spent figuring out which Wellington connections were working and which were not.

A few of the BTM connections have a lot of difficulty running AMP tests to a few of the scheduled targets: AMP fails to resolve DNS properly for these targets but using dig or ping gets the right results. Did some packet captures to see what was going on: it looks like the answer record appears in the wrong section of the response and I guess libunbound doesn't deal with that too well. The problem seems to affect only connections using a specific brand of modem, so I imagine there is some bug in the DNS cache software on the modem.

Continued tracing my NNTSC live export problem. It soon became apparent that NNTSC itself was not the problem: instead, the client was not reading data from NNTSC, causing the receive window to fill up and preventing NNTSC from sending new data. A bit of profiling suggested that the HMM detector in netevmon was potentially the problem. After disabling that detector, I was able to keep things running over the long weekend without any problems.

Fixed a libwandio bug in the LZO writer. Turns out that the "compression" can sometimes result in a larger block than the original uncompressed data, especially when doing full payload capture. In that case, you are supposed to write out the original block instead but we were mistakenly writing the compressed block.
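
The fix boils down to a check along these lines (illustrative only, not the actual libwandio code; the helper name is made up):

    /* Illustrative sketch of the fix: if LZO "compression" expands the
     * block, write the original bytes instead. wrkmem must be at least
     * LZO1X_1_MEM_COMPRESS bytes and dst must allow for worst-case
     * expansion of the source block. */
    #include <lzo/lzo1x.h>
    #include <unistd.h>

    ssize_t write_block(int fd, const unsigned char *src, lzo_uint src_len,
            unsigned char *dst, unsigned char *wrkmem) {
        lzo_uint dst_len = 0;

        if (lzo1x_1_compress(src, src_len, dst, &dst_len, wrkmem) != LZO_E_OK)
            return -1;

        if (dst_len >= src_len) {
            /* compression made the block bigger: store it uncompressed */
            return write(fd, src, src_len);
        }
        return write(fd, dst, dst_len);
    }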




Continued hunting for the bug in the NNTSC live exporter with mixed success. I've narrowed it down to the per-client queue being the problem, and it doesn't appear to be due to any obvious slowness inserting into the queue. Unfortunately, the problem seems to only occur once or twice a day, so it takes a day before any changes or additional debugging take effect.

Went back to working on the Mechanical Turk app for event detection. Finally finished a tutorial that shows most of the basic event types and how to classify them properly. Got Brendon and Brad to run through the tutorial and tweaked it according to their feedback. The biggest problem is the length of the tutorial -- it takes a decent chunk of our survey time just to run through the tutorial, so I'm working on ways to speed it up a bit (as well as event classification in general). These include adding hot-keys for significance rating and using an imagemap to make the "start time" graph clickable.

Spent a decent chunk of my week trying to track down an obscure libtrace bug that affected a couple of 513 students, which would cause the threaded I/O to segfault whenever reading from the larger trace file. Replicating the bug proved quite difficult as I didn't have much info about the systems they were working with. After going through a few VMs, I eventually figured out that the bug was specific to 32-bit little-endian architectures: due to some lazy #includes, the size of an off_t was either 4 or 8 bytes in different parts of the libwandio source code, which resulted in some very badly sized reads. The bug was found and fixed a bit too late for those affected students, unfortunately.
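
For the record, the underlying issue is easy to demonstrate on a 32-bit system:

    /* On 32-bit Linux, off_t is 4 bytes unless large file support is
     * enabled. If one source file defines _FILE_OFFSET_BITS 64 and another
     * does not, they disagree about the size of anything containing an
     * off_t, which is exactly what happened inside libwandio. */
    #define _FILE_OFFSET_BITS 64    /* must appear before any system header */
    #include <sys/types.h>
    #include <stdio.h>

    int main(void) {
        printf("sizeof(off_t) = %zu\n", sizeof(off_t));  /* 8 with the define, 4 without */
        return 0;
    }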




Continued developing code to group events by common AS path segments. Managed to add an "update tree" function to the suffix tree implementation I was using and then changed it to use ASNs rather than characters to reduce the number of comparisons required. Also developed code to query NNTSC for an AS path based on the source, destination and address family for a latency event, so all of the pieces are now in place.

In testing, I found a problem where live NNTSC exporting would occasionally fall several minutes behind the data that was being inserted into the database. Because this would only happen occasionally (and usually overnight), debugging this problem has taken a very long time. Found a potential cause in an unhandled EWOULDBLOCK on the client socket, so I've fixed that and am waiting to see if that has resolved the problem.

Did some basic testing of libtrace 4 for Richard, mainly trying to build it on the various OS's that we currently support. This has created a whole bunch of extra work for him due to the various ways in which pthreads are implemented on different systems. Wrote my first parallel libtrace program on Friday -- there was a bit of a learning curve but I got it working in the end.