This week I spent time tidying up code and my Wiki page, and committing code changes to Mercurial. I also did some testing on my current set-up to check the reliability of pinging clients (as packet loss was high). It seems that packet loss occurs very early on and then things become more stable over time.
The gateway is sending Neighbour Solicitations (NS) onto the network every 10 seconds. I'm still unsure why this is necessary. I tried modifying the code to increase the retransmit time (which resulted in a 20 second interval between each NS), and also tried removing it completely, but then I was unable to ping the gateway device. Either there is a reason for this behaviour or there is a bug.
I have also had a look at microCoAP (a CoAP server which runs on Linux) and have it running on my machine, but have not yet been able to communicate with it from the WSN. Ideally we want the CoAP server running on Linux.
Updated the certificate signing script to also create and set up
rabbitmq users for new amplet clients. The script now also requires the
user to be more explicit when revoking certificates in ambiguous cases,
and checking of user input for host/cert names etc. has been improved.
Rewrote the logic in the client around requesting and fetching
certificates, to make sure that the timeouts and wait periods apply to
the whole process, not just to the final step of fetching a certificate.
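The idea of a whole-process deadline can be sketched roughly as follows. This is a hypothetical illustration, not the actual amplet client code; all names (`step_fn`, `run_with_deadline`) are invented for the example. Each step is handed only the time that remains, so a slow early step cannot leave a later step with a full fresh timeout.

```c
#include <stddef.h>
#include <time.h>

/* A step in the certificate process; it receives the seconds remaining
 * before the overall deadline and returns 0 on success. */
typedef int (*step_fn)(long seconds_remaining);

/* Run every step under one shared deadline. Returns 0 if all steps
 * complete in time, -1 if a step fails or the deadline passes. */
static int run_with_deadline(step_fn *steps, size_t nsteps, long timeout) {
    time_t deadline = time(NULL) + timeout;
    for (size_t i = 0; i < nsteps; i++) {
        long remaining = (long)(deadline - time(NULL));
        if (remaining <= 0)
            return -1;                /* overall timeout exceeded */
        if (steps[i](remaining) != 0)
            return -1;                /* step itself failed */
    }
    return 0;
}
```

The key design point is that `remaining` is recomputed before every step from one fixed `deadline`, rather than each step starting its own timer.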
Started work on building Debian packages for the certificate signing
scripts. Spent some time making sure that all the required files were
included, installed at the correct location and with the correct
permissions. Created web configuration scripts to allow the web side of
things to run almost out of the box.
Spent the week continuing to refactor code, in particular the packet loop.
I moved support for delaying packets (for tracetime playback) into this loop, since packets are now read in batches. I've reworked the code so that messages can be received between packets while those packets are being delayed. When tracetime playback is not enabled, we skip the message check between packets within a batch, for performance reasons.
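The shape of that loop can be sketched as below. This is a simplified, hypothetical illustration of the idea, not the actual libtrace code; the clock and message queue are passed in as callbacks so the sketch is self-contained.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    double ts;    /* packet timestamp in seconds */
} packet_t;

/* Process one batch of packets. In tracetime mode, wait until each
 * packet's timestamp is due and poll for messages while waiting; in
 * non-tracetime mode, deliver the batch back-to-back with no message
 * checks. Returns the number of message polls performed. */
static int process_batch(packet_t *batch, size_t n, bool tracetime,
                         double (*now_cb)(void), bool (*poll_cb)(void),
                         int (*deliver)(packet_t *)) {
    int polls = 0;
    for (size_t i = 0; i < n; i++) {
        if (tracetime) {
            /* delay the packet, handling messages in the meantime */
            while (now_cb() < batch[i].ts) {
                polls++;
                if (poll_cb()) {
                    /* a real implementation would handle the message */
                }
            }
        }
        deliver(&batch[i]);
    }
    return polls;
}
```

This captures the trade-off described above: the message check only costs anything when packets are actually being delayed.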
Libtrace 3.0.22 has been released today.
This is (hopefully) the final release of libtrace version 3, as we are now turning our attention to preparing to release libtrace 4 a.k.a. 'Parallel Libtrace'.
This release includes the following changes / fixes:
* Added protocol decoding support for GRE and VXLAN.
* DPDK format now supports 1.7.1 and 1.8.0 versions of DPDK.
* DAG format now supports DAG 5.2 libraries.
* Fixed performance degradation introduced to the ring: format in 3.0.21.
* DAG dropped packet count no longer includes packets observed while libtrace was not using the DAG card.
* Fixed bad PCI addressing in DPDK format.
* libwandio now reports an error when reading from a truncated gzip-compressed file, so it is now consistent with zlib-based tools.
The full list of changes in this release can be found in the libtrace ChangeLog.
You can download the new version of libtrace from the libtrace website.
This week I successfully set up a network communicating with CoAP. I modified the client code of Contiki's er-rest-example.c to send CoAP POSTs to the 6lbr CoAP server. The CoAP server on 6lbr has been modified to handle the incoming light resource from the sensor nodes, printing the light sensor value and the MAC address of the sender. From the debugging output, the incoming packets appear to be received by 6lbr at irregular intervals - this will need further investigation. Ideally the 6lbr server should receive packets every 10 seconds (the interval set on the sensor nodes).
Generated some fresh DS probabilities based on the results of the new DistDiff detector. Turns out it isn't quite as good as I was originally hoping (credibility is around 56%) but we'll see how the additional detector pans out for us in the long run.
Started adding a proper configuration system to netevmon, so that we can easily enable and tweak the individual detectors via a config file (as opposed to the parameters all being hard-coded). I'm using libyaml to parse the config file itself, using Brendon's AMP schedule parsing code as an example.
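To give a flavour of what such a config might look like, here is an illustrative YAML fragment. The keys and values are hypothetical examples only, not the actual netevmon schema:

```yaml
# Hypothetical example only -- not the real netevmon config schema.
detectors:
  distdiff:
    enabled: true
    sensitivity: 0.8    # illustrative tuning parameter
  plateau:
    enabled: false
```

The point is simply that each detector gets its own block, so individual detectors can be switched on, switched off, or tuned without recompiling.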
Spent a day looking into DUCK support for libtrace, since the newest major DAG library release had changed both the duckinf structure and the name and value of the ioctl used to read it. When trying to test my changes, I found that we had broken DUCK in both libtrace and wdcap, so I got all that working sensibly again. Whether this was worthwhile is a bit debatable, seeing as we don't really capture DUCK information anymore and nobody else had noticed this stuff was broken for months :)
I spent the week tidying the Linux format. This involved making the code style consistent (with at least the DAG format), subclassing the ring format into its own file, and changing the format data to use the new list-style format.
Tidying the code style was probably not strictly necessary, since we could use an automated tool like astyle, but I did it as I went along with the other two tasks.
The new list style reduces code duplication in the formats. The per-thread format data and the main format data previously shared some of the same fields, and when using the threaded interface a number of fields went unused. By putting all of these per-stream fields into a list, we get a consistent interface for both the threaded and non-threaded APIs. This also simplifies the various API calls, as the code is the same for both.
Subclassing the ring format was needed to help maintainability. Ring really is a subclass of native, as a large number of the ring functions make use of the native functions. I moved the ring pieces into a new file, moved the format typedefs into a common header file and created a function to get a reference to the native format. The ring initialiser gets the native format and copies fields into the ring format.
I didn't manage to complete the migration during the week, so I added the partial work to a branch and pushed it up to Richard's libtrace. The ring format compiles, but the packet reading still needs to be added back in. The rest of the functionality was copy-pasted, and I assume it will work correctly. The native code needs more work to strip out the ring pieces and convert to the list format.
In hindsight, subclassing native as well as converting to the new list format might have been a bad idea. However, the way the format referencing was done in format_linux.c was not as easy to use as in format_dag25.c. Moving to the way it is done in format_dag25.c was going to be a large enough change anyway so I decided to just add in the list format style at the same time.
This week I looked further into the ContikiMAC driver. I found that ContikiMAC does not work properly when 6lbr is in smart bridge mode. I programmed 6lbr and the sensor node to use the ContikiMAC RDC driver and the CSMA MAC driver, but with ContikiMAC enabled I was unable to ping the 6lbr device or the sensor nodes, and unable to access the 6lbr web page. With further research I could not find anyone who had configured 6lbr in smart bridge mode with the ContikiMAC driver, which makes me think ContikiMAC only works with router mode, not smart bridge mode. I also spent some time looking into Contiki's real-time timer (rtimer) as an alternative way to turn the radio on and off to conserve power, and have started writing code for this. Next week I will focus on the CoAP protocol and setting up the network with a CoAP server (on 6lbr) and a client (sensor node) that sends requests to the server for sensor data. The aim is to modify the client code on the sensor device so it posts sensor data to the server on the 6lbr device.
I haven't done a blog in ages! Oops.
Since my last blog, I have been refining the DAG format so the new cards work well. I wrote a bash script to configure the memory and hashing on the cards, because doing that by hand is a long process. Basically, you tell the script how many streams you want, how much RAM to give them and which card to configure, and it goes away and does just that.
I validated the setup using Ostinato and the dagsnap utility, which showed a pretty good balance across all the streams.
The API changed slightly between DAG 4 and DAG 5. I needed to change how the configuration works to use the new CSAPI, and I also had to drop DUCK support. I really have no idea how DUCK works, and all references to the word DUCK have disappeared from the Endace documentation. I left a helpful TODO message.
I also spent a bunch of time discussing how we're going to refactor the code base in the formats to better support the two different APIs - parallel and non-parallel libtrace. In the end we decided to use a linked list to hold per-stream data. The non-parallel interface gets initialised pointing at the first item in the list (FORMAT_DATA_FIRST), while parallel initialisation adds more entries to the end of the list. Any code common to the parallel and non-parallel paths just gets wrapped in a way that passes the correct data structure to the method.
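A minimal sketch of that per-stream list might look like the following. The structure and names here are illustrative (apart from the FORMAT_DATA_FIRST idea mentioned above), not libtrace's actual definitions:

```c
#include <stdlib.h>

/* One entry of per-stream state; the parallel API appends one of these
 * per stream, while the non-parallel API only ever uses the first. */
typedef struct stream_data {
    int fd;                     /* e.g. the stream's file descriptor */
    struct stream_data *next;
} stream_data_t;

typedef struct {
    stream_data_t *streams;     /* head of the per-stream list */
} format_data_t;

/* Non-parallel code paths always operate on the first entry. */
#define FORMAT_DATA_FIRST(fmt) ((fmt)->streams)

/* Append a new stream entry to the end of the list. The single-threaded
 * path calls this once at init time; the parallel path calls it once
 * per stream. Returns the new entry, or NULL on allocation failure. */
static stream_data_t *add_stream(format_data_t *fmt, int fd) {
    stream_data_t *s = malloc(sizeof(*s));
    if (!s)
        return NULL;
    s->fd = fd;
    s->next = NULL;
    stream_data_t **tail = &fmt->streams;
    while (*tail)
        tail = &(*tail)->next;
    *tail = s;
    return s;
}
```

With this layout, shared helper functions can simply take a `stream_data_t *` and be handed either `FORMAT_DATA_FIRST(fmt)` or a particular thread's entry.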
Implementing this took a fair amount of time, as there were a lot of references to modify. It compiles, and tracestats still counts the correct number of packets. I should also verify that it still performs well.
Monday was a day off, and I spent most of Tuesday working on my slides and graphs as per Shane's suggestions for my NZNOG talk.
The rest of the week was spent attending NZNOG. The WAND presentation went well, though it ran slightly over time. There were many interesting talks and I enjoyed my time, and I talked to a few people who were interested in my research after the presentation.