06 Oct 2014

Finished and submitted my PAM paper, after incorporating some feedback from Richard.

Fixed a minor libwandio bug where it was not giving any indication that a gzipped file was truncated early and content was missing.

Managed to get a new version of the amplet code from Brendon installed on my test amplet. Set up a full schedule of tests and found a few bugs that I reported back to the developer. By the end of the week, we were getting closer to having a full set of tests working properly -- just one or two outstanding bugs in the traceroute test.

Got netevmon running again on the test NNTSC. Noticed that we are getting a lot of false positives for the changepoint and mode detectors for test targets that are hosted on Akamai. This is because the series is fluctuating between two latency values and the detectors get confused as to which of the values is "normal" -- whenever it switches between them, we get an erroneous event. Added a new time series type to combat this: multimodal, where the series has 2 or 3 clear modes that it is always switching between. Multimodal series will not run the changepoint or mode detectors, but I hope to add a special multimode detector that alerts if a new and different mode appears (or an old mode disappears).
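As a rough sketch of the idea (not the netevmon implementation; the bin width and dominance threshold are made-up values), a series could be flagged as multimodal when two or three latency bins account for nearly all of the samples but no single bin does:

```python
from collections import Counter

def looks_multimodal(latencies, binsize=1.0, dominance=0.9):
    """Return True if 2 or 3 latency bins together cover almost all samples."""
    bins = Counter(round(value / binsize) for value in latencies)
    top = bins.most_common(3)
    covered = sum(count for _, count in top)
    return (len(top) >= 2 and
            covered >= dominance * len(latencies) and
            top[0][1] < dominance * len(latencies))

# A series that flips between roughly 5ms and 20ms, like the Akamai-hosted
# targets described above, should be classified as multimodal.
series = [5.1, 5.2, 20.3, 5.0, 20.1, 20.2, 5.3, 20.0] * 10
print(looks_multimodal(series))   # True
```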

02 Oct 2014

Spent last week on leave, getting my balance down :)

01 Oct 2014

Results from the last non-event-based simulator run have been analysed and the graphs added to the PhD conference slide set. The graphs show a simulation of many sources to few destinations using the Traceroute multipath detection algorithm (load balancer) data. They show the effects of both the global inter-monitor data and the local intra-monitor data.

The churn paper has been further refined for PAM and submitted. This involved adjusting headings, adding keywords, moving table captions, setting the paper size to letter and adding some address data.

Another run of the black hole detector has been finished and the results downloaded.

30 Sep 2014

Turned a lot of the scheduling web interface code into templates that
can be reused between creating and updating tests. The two views were
similar enough that most of the code could be shared, with only a few
minor changes specific to each view.
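As an illustration of the idea (hypothetical names and fields, not the actual view code), a single template can serve both views when the only differences are the submit target, the pre-filled values and the button label:

```python
from string import Template

# Shared form template used by both the create and update views.
TEST_FORM = Template("""
<form action="$action" method="post">
  <input name="target" value="$target">
  <input name="frequency" value="$frequency">
  <button type="submit">$label</button>
</form>
""")

def render_create_form():
    return TEST_FORM.substitute(action="/schedule/create",
                                target="", frequency="60", label="Create")

def render_update_form(test):
    return TEST_FORM.substitute(action="/schedule/update/%s" % test["id"],
                                target=test["target"],
                                frequency=test["frequency"], label="Update")
```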

Fixed up some small bugs in the ASN query code to make sure that all
addresses in the path are fetched (paths shorter than the initial TTL
weren't querying for the ASN of the final hop). The cache will now be
cleared regularly during operation and will also tidy up properly after
itself on program end. Started work on replacing the DNS-based ASN
fetching with the TCP bulk whois interface for the standalone traceroute
tests too.
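For illustration, a rough sketch of what a TCP bulk whois lookup looks like, assuming a Team Cymru-style bulk interface on whois.cymru.com port 43 (this is not the amp/ASN code, and the example addresses are from the documentation ranges):

```python
import socket

def bulk_asn_lookup(addresses, server="whois.cymru.com", port=43):
    # One TCP connection resolves every address in the list, instead of a
    # separate DNS query per hop.
    query = "begin\nverbose\n" + "\n".join(addresses) + "\nend\n"
    with socket.create_connection((server, port)) as sock:
        sock.sendall(query.encode())
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk
    results = {}
    for line in response.decode().splitlines()[1:]:   # skip the banner line
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 2:
            results[fields[1]] = fields[0]            # address -> AS number
    return results

print(bulk_asn_lookup(["192.0.2.1", "198.51.100.1"]))
```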

Spent some time applying patches and building old bash from source to
update the old amplets against the new bash vulnerability. These
machines are really due for a software refresh!

30 Sep 2014

This week the focus has moved to writing my honours report.

I've started from my mid-term report and am reusing some parts of the introductory chapters, which have changed little. Like a lot of honours students, I am tracking my progress with a word count, which is up at http://wand.net.nz/520/.

24 Sep 2014

Spent some time setting up a properly scheduled throughput test between
machines in the real world. While doing so, found out a few things about
certificate management that may not have been fully thought through yet.
The certificates used for connections to the control socket (for
starting the remote end of the test) are currently only configured to
act as clients; they can't act as a server without an extra setting
being enabled. Also, the server currently tries to validate client
hostnames, which relies on reverse DNS and won't be effective in most
real-world cases.
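For reference, a hedged sketch of one way to handle the client/server problem with OpenSSL (file names are made up and this is not the amplet certificate setup): issuing the certificate with both extended key usages lets the same certificate authenticate in either role.

```
# ext.cnf -- let one certificate act as both TLS client and TLS server
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
```

Signing the request with these extensions (e.g. openssl x509 -req -in amplet.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -extfile ext.cnf -out amplet.crt) produces a certificate that a peer will accept at either end of the connection.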

Added caching to the ASN lookups that use the bulk TCP interface, using
a data structure that looks similar to a radix trie. It appears to work
well and is fast. May also try to use this in the temporary test
processes too, to store addresses and ASNs while they get applied to a
particular set of traceroute data (it would more easily remove
duplicates from the query).
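For illustration, a rough sketch of a radix-trie-style cache along the same lines (not the actual data structure; the prefix and ASN in the example are from the documentation ranges):

```python
import ipaddress

class AsnCache:
    """Maps IPv4 prefixes to AS numbers using a binary trie on address bits."""

    def __init__(self):
        self.root = {}   # node: {0: child, 1: child, "asn": value}

    def insert(self, prefix, asn):
        net = ipaddress.ip_network(prefix)
        bits = format(int(net.network_address), "032b")[:net.prefixlen]
        node = self.root
        for bit in bits:
            node = node.setdefault(int(bit), {})
        node["asn"] = asn

    def lookup(self, address):
        # Walk the trie, remembering the longest prefix seen on the way down.
        bits = format(int(ipaddress.ip_address(address)), "032b")
        node, best = self.root, None
        for bit in bits:
            if "asn" in node:
                best = node["asn"]
            node = node.get(int(bit))
            if node is None:
                return best
        return node.get("asn", best)

cache = AsnCache()
cache.insert("192.0.2.0/24", 64496)     # documentation prefix and ASN
print(cache.lookup("192.0.2.57"))       # -> 64496
print(cache.lookup("198.51.100.1"))     # -> None (not cached)
```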

23 Sep 2014

Some simulator runs finished on Wraith and I downloaded the results. There are some problems with Wraith currently, but my results were unaffected. The simulator runs on Spectre are not quite finished. These will be included in the slides for the internal university PhD conference.

I have done more work on the slides for the university conference. This included reworking the introduction to Doubletree to make it briefer. It also included adding more results from the Doubletree and Megatree simulators.

I registered an abstract for PAM and am continuing to refine the paper that will be submitted soon.

22 Sep 2014

I used an interview in Sydney to procrastinate for most of this week...

Spent some time looking into HTTP/CoAP proxies. The Java-based Californium project seems the most well-established (although there seems to be little to compare it to). I got it up and running and it works nicely as a proxy, but it isn't completely seamless (i.e. resources are accessed on remote CoAP servers via addresses such as http://proxy_host/[coap_host]/resource, as opposed to http://[coap_host]/resource). I would have liked to have more time to investigate setting up a transparent proxy, but then again, I don't think direct HTTP/CoAP proxies are an intended use case for the Internet of Things - rather, an indirect proxy featuring caching (load balancing for the embedded network) and returning "prettier" data to the client would be more useful. At least with this experience I'll have something to discuss in my report.

A disadvantage of the Java proxy is that it runs separately from Contiki/6LBR, which has full control over the network interfaces. This means the CoAP proxy must run on separate hardware from the 6LBR. In an ideal system this proxy would run as a Contiki process.

19 Sep 2014

Finished developing and testing stream / collection selection in netevmon.

Added support for the HTTP test back into NNTSC. We only store basic statistics from the test, i.e. number of objects, bytes, servers and the time taken to fetch everything, as opposed to the previous schema which tried to store detailed information about each individual fetched object. Managed to get my own amplet VM to do some testing and have been happily running HTTP tests for most of the week.

Replaced the pika code in NNTSC to use asynchronous connections rather than blocking connections. This should make our rabbit queue publishing and consuming code a bit more robust, especially if a TCP connection breaks down, and it also appears to have made our backlog processing much faster.
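To illustrate the shape of the change, here is a rough sketch of a pika consumer using the asynchronous SelectConnection rather than a BlockingConnection. It is not the NNTSC code: it assumes a local RabbitMQ broker, a queue named "nntsc-example" (a made-up name), and the pika 1.x callback API.

```python
import pika

def on_message(channel, method, properties, body):
    # Handle one message from the queue, then acknowledge it.
    print("received %d bytes" % len(body))
    channel.basic_ack(delivery_tag=method.delivery_tag)

def on_channel_open(channel):
    # Start consuming once the channel is ready; messages arrive via callback.
    channel.basic_consume(queue="nntsc-example", on_message_callback=on_message)

def on_connection_open(connection):
    connection.channel(on_open_callback=on_channel_open)

params = pika.ConnectionParameters(host="localhost")
conn = pika.SelectConnection(params, on_open_callback=on_connection_open)
try:
    # The ioloop drives all callbacks; nothing blocks waiting on the broker.
    conn.ioloop.start()
except KeyboardInterrupt:
    conn.close()
    conn.ioloop.start()   # run the loop until the close handshake completes
```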

Spent a decent chunk of time chasing down a bug in the AMP HTTP test that would cause it to segfault if you tested to certain sites. After delving deep into the flex code that parses the HTML on the fetched pages looking for other objects to fetch, we eventually found that the buffer being provided to store the URL of the found object was not big enough to fit all the URLs we were seeing.

17 Sep 2014

A new round of data collection has been kicked off for the black hole in load balancers detector. The data from the previous round has been processed.

Data from the Doubletree simulator that uses MDA data has been collected and made into graphs along with some of the Megatree data. Four of these graphs have been included in the slide set for the internal PhD conference.

A draft slide set for the PhD conference has been completed. The proposed title is "Doubletree and Megatree simulations with control packet cost analysis".