I have improved the introduction and conclusions of my thesis, adding details of my contributions and more discussion of the simulation outcomes to try and tie the story together.
I commented that Megatree does not require repeated destinations to the same extent that Doubletree does.
Fixed the issues with BSD interfaces in parallel libtrace. Ended up implementing a "bucket" data structure for keeping track of buffers that contain packets read from a file descriptor. Each bucket effectively maintains a reference counter that is used to determine when libtrace has finished with all the packets stored in a buffer. When the buffer is no longer needed, it can be freed. This allows us to ensure packets are not freed or overwritten without needing to memcpy the packet out of the buffer it was read into.
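A minimal sketch of the bucket idea (the names here are hypothetical, not the actual libtrace implementation): each bucket wraps one read buffer with a reference count that is incremented per packet handed out and decremented when the packet is released, so the buffer is only freed once no packets point into it.

```c
#include <assert.h>
#include <stdlib.h>

/* A bucket owns one read buffer and counts outstanding packet references. */
typedef struct bucket {
    char *buffer;    /* raw bytes read from the file descriptor */
    size_t refcount; /* packets still pointing into this buffer */
} bucket_t;

bucket_t *bucket_create(size_t bufsize) {
    bucket_t *b = malloc(sizeof(bucket_t));
    b->buffer = malloc(bufsize);
    b->refcount = 0;
    return b;
}

/* Called each time a packet within the buffer is handed to a caller. */
void bucket_ref(bucket_t *b) {
    b->refcount++;
}

/* Called when the caller is finished with a packet; frees the buffer
 * once no packets reference it. Returns 1 if the bucket was freed. */
int bucket_unref(bucket_t *b) {
    assert(b->refcount > 0);
    if (--b->refcount == 0) {
        free(b->buffer);
        free(b);
        return 1;
    }
    return 0;
}
```

In a parallel capture path the increment/decrement would need to be atomic; the sketch above only shows the single-threaded ownership rule.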
Added bucket functionality to both RT and BSD interfaces. After a few initial hiccups, it seems to be working well now.
Continued testing libtrace on various operating systems and configurations. Replaced our old DAG configuration code, which used a deprecated API call, with code that uses the CSAPI. Just need to get some traffic on our DAG development box so I can make sure the multiple-stream code works as expected.
Managed to add another two protocols to libprotoident: Google Hangouts and Warthunder.
Put more time into my implementation chapter.
I'm taking a break from writing the standard, as I really need to get my thesis written so it can be reviewed.
I'll be starting on the background chapter next. I'll try to leave the secure provisioning chapter until after I finish writing the standard.
Tried a couple of different approaches to display test schedules for a
site/mesh and settled on a fairly condensed table. The most useful
information is shown in brief, and you can click on a row to expand it
and see the full listing of test arguments, targets, etc. From here you
can also add new tests to a site/mesh, with nice Bootstrap modals and
human-readable text hiding most of the raw scheduling options, which is
important if we want people to be able to easily update test schedules.
Spent some time working with Brad on the upgrade process for the amplets
running puppet. As part of that I updated the amplet client package with
logrotate configuration, and updated the init script to wait until it
was sure the client had started so that puppet didn't get overzealous
and try to start multiple copies. Also had to track down a few instances
of unusual behaviour to determine that everything was in fact acting as
expected.
Started reworking the introduction and conclusions of my thesis, now that the rest of it has had the once-over.
Tried to find a topic for the internal PhD conference. I might present the stopping values we generated, along with observations on the data collected using those stopping values with a high-confidence setting.
Finished the parallel libtrace HOWTO guide. Pretty happy with it and hopefully it should ease the learning curve for users who want to move over to the parallel API once released.
Continued working towards the beta release of libtrace4. Started testing on my usual variety of operating systems, fixing any bugs or warnings that cropped up along the way. There are definitely some issues with using the parallel API with BSD interfaces, so those will need to be resolved before I can do the release.
Now that I've got a full week of Waikato trace, I've been occasionally looking at the output from running lpi_protoident against the whole week and seeing if there are any missing protocols I can identify and add to libprotoident. Managed to add another 6 new protocols this week, including Diablo 3 and Hearthstone.
Met with Rob and Stephen from Endace on Thursday morning and had a good discussion about how we are using the Endace probe and what we can do to get more out of it.
Spent some time designing a new test schedule for an upcoming
deployment. Did the maths around how much storage is required per test
at the frequencies I want to measure, so hopefully have a pretty good
handle on the amount of storage required to support the schedule.
Investigated how our current database trimming scripts work, and why
they are so slow to remove data from prophet's database - turns out that
there hadn't been an analyze run in quite some time (vacuum/analyze had
been "temporarily" disabled). Tried another look at timestamp
partitioning with a generic trigger (that wouldn't require upkeep) and
got it working, but inserting via the trigger is approximately 10 times
slower than inserting directly into the table.
Picked up the work I had been doing previously on a web-based interface
to AMP test scheduling. Got it working again and mostly up to date with
the changes in the develop branch, before spending some time reading up
more on some possible visualisation techniques for schedules. Still not
sure there is a useful visual approach to this, and the right answer may
just be to very simply present the data in some sort of list or table.
I updated my thesis draft chapters (background and related work), based on feedback from my chief supervisor.
I found some more papers to refer to in my related work and discussion chapters, via a search for Doubletree and cost analysis. The authors of one paper upgrade Max-Delta by building in Doubletree: all traceroute topology information is shared between vantage points, steps are taken to reduce the size of the topology data set, and decisions are made on the fly about which destinations are the best choice for improving each vantage point's topology discovery coverage of the Internet.
Fixed the bug that was causing my disk-writing wdcap to crash. The problem was a slight mismatch between the record length stored in the ERF header and the amount of packet data actually available, so we were occasionally touching bad memory. Since fixing that, wdcap has been happily capturing since Tuesday morning without any crashes or dropped packets.
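The defensive check for that class of bug can be sketched as follows, assuming the standard 16-byte base ERF record header with the big-endian rlen field at bytes 10-11 (function names are my own, not wdcap's):

```c
#include <stddef.h>
#include <stdint.h>

#define ERF_HEADER_LEN 16 /* base ERF record header size */

/* Read the big-endian rlen field (bytes 10-11 of the ERF header). */
static uint16_t erf_rlen(const uint8_t *rec) {
    return (uint16_t)((rec[10] << 8) | rec[11]);
}

/* Returns the record length if the whole record fits within the
 * 'remaining' bytes of the capture buffer, or 0 if the header claims
 * more data than is actually available. */
size_t erf_check_record(const uint8_t *rec, size_t remaining) {
    if (remaining < ERF_HEADER_LEN)
        return 0;
    uint16_t rlen = erf_rlen(rec);
    if (rlen < ERF_HEADER_LEN || rlen > remaining)
        return 0; /* truncated or corrupt record: do not touch payload */
    return rlen;
}
```

Validating rlen against the bytes actually read, before dereferencing any payload, turns the bad-memory access into a recoverable error.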
Continued working towards a releasable libtrace4. Polished up the parallel tools and replaced the old rijndael implementation in traceanon with the libcrypto code from wdcap. Made sure all the examples build nicely and added a full skeleton example that implements most of the common callbacks. Fixed all the outstanding compiler warnings and made sure all the header documentation was correct and coherent.
Started developing a HOWTO guide for parallel libtrace that introduces the new API, one step at a time, using a simple example application. I'm about half-way through writing this.
Been a wee while since I last wrote a report. I was in Germany and Amsterdam over the semester break and didn't make any progress on the project.
The presentation went well (in my opinion). The fact that I had a live demonstration that worked was a real asset! I showed a wireless sensor network that could measure temperature, ambient light and also a 'knock knock' sensor representing a door. The various readings were transported over the network to the coordinator which then passed the information to a python program for visualisation.
All of this required that each feature was functional, so I'd say it was a good demonstration of the features I had developed (sending/receiving packets, device association and authentication).
I'm now working on writing a standard for device provisioning. It's very similar to WPS, which I think is a good compromise between security and usability. It supports the same PSKs I used initially, which gives developers the flexibility to add additional authenticity checking.
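One small detail any PSK-based check needs to get right is comparing key material in constant time, so the comparison doesn't leak how many leading bytes matched. A sketch (illustrative only; the standard itself would authenticate via a MAC derived from the PSK rather than comparing raw keys):

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison of two equal-length secrets. Unlike memcmp,
 * the running time does not depend on where the first difference is. */
int psk_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i]; /* accumulate differences without branching */
    return diff == 0;
}
```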
I am unlikely to implement this standard before the report is due, however I do have a personal interest in the project so it will get done some time! Over the break I should have some free time here and there to add provisioning support as well as IPv6.
At the same time as the standard, I'm writing my thesis. According to the word count graph, I'm sitting at about 2,700 words (which is mostly the provisioning standard).