28 May 2015

Continued to work on the raw data interface to fetch AMP data through
the website. It took some time to find the appropriate place to deviate
from the normal aggregated fetching used for the graphs, but with
minimal code changes there is now a path that will follow the full data
fetching used by standalone programs (e.g. netevmon).

Fetching now works for data described by a stream id, following almost
the same path as the graphs use. To allow some degree of data
exploration and easy generation of URLs, it is also important to handle
data described by the human-readable stream properties. I'm currently in
the process of converting a URL with stream properties into a stream id,
and alerting the user to any missing properties that are required to
define a stream.
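The property-to-stream resolution step can be sketched as below. This is a minimal illustration, not the real AMP/NNTSC code: the property names, the in-memory stream table, and the `resolve_stream` function are all assumptions made for the example.

```python
# Hypothetical sketch: resolving human-readable stream properties to a
# stream id, and reporting any required properties missing from the URL.
# Property names and the stream table are illustrative only.

REQUIRED_PROPERTIES = ["source", "destination", "packet_size", "family"]

# Pretend stream table: stream id -> its defining properties.
STREAMS = {
    1: {"source": "amp-a", "destination": "amp-b",
        "packet_size": "84", "family": "ipv4"},
    2: {"source": "amp-a", "destination": "amp-b",
        "packet_size": "84", "family": "ipv6"},
}

def resolve_stream(properties):
    """Return (stream_id, missing), where missing lists any required
    properties absent from the request so the user can be alerted."""
    missing = [p for p in REQUIRED_PROPERTIES if p not in properties]
    if missing:
        return None, missing
    for stream_id, props in STREAMS.items():
        if all(properties[p] == props[p] for p in REQUIRED_PROPERTIES):
            return stream_id, []
    return None, []
```

The point of returning the `missing` list (rather than just failing) is that the error shown to the user can name exactly which properties the URL still needs.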

26 May 2015

To get my application using an open-source collector instead of nProbe, I had the idea of writing a Python program to parse the output files written by nfcapd. nfcapd has the option to call a program when a new file becomes available, so I call nfdump to read all flows from the new file and output them as comma-separated strings. I then pipe this output to my Python program to save to a database. This solution will also let me control the size of the database, as well as delete flows that are older than a certain date.
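The Python end of that pipeline could look something like the sketch below: read comma-separated flow records from stdin (or any iterable of lines), insert them into SQLite, and prune old rows to keep the database bounded. The field layout (`ts,src,dst,bytes`), the schema, and the use of SQLite are assumptions for the example, not nfdump's actual CSV format or the real program.

```python
# Sketch: store comma-separated flow records in SQLite and prune flows
# older than a cutoff. Field layout and schema are assumptions.
import sqlite3

def store_flows(lines, db_path=":memory:", keep_days=30):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS flows (
                        ts TEXT, src TEXT, dst TEXT, bytes INTEGER)""")
    for line in lines:
        ts, src, dst, nbytes = line.strip().split(",")
        conn.execute("INSERT INTO flows VALUES (?, ?, ?, ?)",
                     (ts, src, dst, int(nbytes)))
    # Keep the database bounded by discarding flows older than keep_days.
    conn.execute("DELETE FROM flows WHERE ts < date('now', ?)",
                 ("-%d days" % keep_days,))
    conn.commit()
    return conn
```

In the real setup the lines would come from stdin, with nfcapd invoking something like `nfdump -r <newfile> -o csv | python store.py` when each capture file rotates.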

26 May 2015

This week Brad sliced off a portion of the pronto for me to experiment with: an OVS bridge with some ports attached. The processing resources on the pronto are still shared with the production RouteFlow and Valve instances.

I encountered a few minor issues with rule priorities and VLANs, and some bugs in my addition to RouteFlow.

Preliminary testing suggests that the PACKET-IN performance of the pronto is very low, which appears to be related to exhausting the available CPU time on the switch.

I also started looking for other research into this; so far, existing research seems primarily to target the performance of modifying OpenFlow rules.

25 May 2015

Finished off the word count program, which outputs comma-separated records in the format: word, occurrences, frequency. This should help give a better understanding of the documents being worked with. Next I think it will be useful to gather statistics on how many words occur only once, how many occur more than 10 times, and so on.
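The counting and the planned occurrence buckets can be sketched together in a few lines; the exact tokenisation and bucket thresholds here are assumptions, not the real program.

```python
# Sketch: per-word CSV records (word,occurrences,frequency) plus simple
# occurrence-bucket statistics. Tokenisation is naive whitespace split.
from collections import Counter

def word_counts(text):
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    lines = ["%s,%d,%.4f" % (w, n, n / total)
             for w, n in counts.most_common()]
    stats = {
        "once": sum(1 for n in counts.values() if n == 1),
        "over_10": sum(1 for n in counts.values() if n > 10),
    }
    return lines, stats
```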

On Friday 22/5 I had a meeting with Bob Durrant from the Statistics department to get his opinion on possible approaches. I now have a better idea of what I'm going to do next: ignore Topic Modeling for now and look at Clustering, starting with only the bearwall firewall logs, then later looking into comparing multiple types of log files.

I will start with a simple version of K-Means and add functionality onto it as I go along, e.g. seeing whether there is more meaning in an IP address when the network and host portions are separated, among many other possibilities. I will also need to look into the languages and libraries I have found for this type of work to decide what I will be using.
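A bare-bones K-Means along those lines might start like the sketch below, here on 1-D numeric features purely for illustration; in practice each log line would first need to be converted into a feature vector (e.g. splitting an IP into network and host parts), and this is not any particular library's implementation.

```python
# Minimal K-Means sketch on 1-D points: assign each point to its
# nearest centroid, then recompute centroids, repeated a fixed number
# of iterations.
import random

def kmeans(points, k, iterations=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign p to the nearest centroid (1-D distance).
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as its cluster mean; keep the old
        # centroid if the cluster emptied out.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)
```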

I have also been working on the presentation that I will give on Wednesday the 27th.

25 May 2015

Carried out validation on the trace based simulators and the IS0 simulator. The parts completed have been written into my thesis.

The validation programs for IS0 needed to be updated because of the extra terms that I have included in the analysis: source windows and maximum destinations per AS. Source windows are groupings of source addresses which are analysed at the same time, before the next group is analysed. Limiting the number of destinations per AS keeps the analysis a manageable size and maximises the spread of analysis across the Internet.
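The two terms can be illustrated with a small sketch; the window size, per-AS cap, and AS numbers below are made up for the example and are not the values used in the analysis.

```python
# Illustrative sketch of the two new analysis terms: "source windows"
# batch source addresses into fixed-size groups analysed together, and
# a per-AS cap limits how many destinations any one AS contributes.

def source_windows(sources, window_size):
    """Split the source list into consecutive fixed-size windows."""
    return [sources[i:i + window_size]
            for i in range(0, len(sources), window_size)]

def cap_destinations(dest_to_as, max_per_as):
    """Keep at most max_per_as destinations from each AS, preserving
    order; dest_to_as is a list of (destination, asn) pairs."""
    kept, seen = [], {}
    for dest, asn in dest_to_as:
        if seen.get(asn, 0) < max_per_as:
            kept.append(dest)
            seen[asn] = seen.get(asn, 0) + 1
    return kept
```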

25 May 2015

The current state of 802.15.4 in the Linux kernel has support for basic functionality only: send/receive (no scan or associate), and only SoftMAC drivers are available at this stage.
I am moving away from the low-level implementation (drivers) and more toward higher-level interaction (though not quite the application layer).
I will be setting up the IPv6 gateway using radvd for addressing clients (assuming the nodes and gateway are already set up on the same channel/PAN id, as association is not functional yet).
Another possible solution for automatic addressing would be DHCPv6 (the IPv6 version of DHCP), but this would require a DHCP server on the gateway and knowledge of the address blocks/ranges.
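For the radvd approach, the gateway configuration could be as small as the fragment below; the interface name and prefix are placeholders, not the actual testbed values.

```
# /etc/radvd.conf sketch -- interface name and prefix are placeholders.
interface lowpan0
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

With this in place, clients on the link autoconfigure addresses from the advertised prefix (SLAAC), which avoids needing a DHCPv6 server at all.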

25 May 2015

Continued refactoring the matrix javascript code in amp-web to be less of an embarrassment. This took quite a bit longer than anticipated because a) javascript and b) I was trying to ensure that switching between different matrix types would result in sensible meshes, metrics and splits being chosen based on past user choices. Eventually got to the stage where I'm pretty happy with the new code, so now we just need to find a time to deploy some of the changes on BTM.

Started testing my new parallel anomaly_ts code. The main hiccup was that embedded R is not thread-safe, so I've had to wrap any calls out to R with a mutex. This creates a bit of a bottleneck in the parallel system so we may need to revisit writing our own implementation of the complex math that I've been fobbing off to R. After fixing that, latency time series seem to work fairly well in parallel but AS traceroute series definitely do not so I'll be looking into that some more next week.
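The mutex approach can be shown in miniature: serialise every call into the non-thread-safe library behind a single lock, so only one worker thread is ever inside it. This is an illustration in Python rather than the real C/R code, and `r_eval` is a stand-in for the actual embedded-R call.

```python
# Sketch: embedded R is not thread-safe, so all calls out to it are
# serialised behind one mutex. r_eval is a placeholder for the real
# call into the R runtime.
import threading

_r_lock = threading.Lock()

def r_eval(expr):
    # Placeholder for the actual embedded-R evaluation.
    return eval(expr)

def safe_r_eval(expr):
    """Only one thread may be inside R at a time."""
    with _r_lock:
        return r_eval(expr)
```

The lock is exactly the bottleneck described above: every worker queues behind it, which is why replacing the R calls with a thread-safe native implementation may eventually be worthwhile.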

22 May 2015

This week saw a good step in progress. I can connect a host, it can achieve a DHCP lease, and it can talk to a router. Beyond this, it should work; I just need to get a VM running that has access to the interwebs, or run a webserver.

I will probably look at expanding my test environment to multiple hosts connecting to the HOP on different VLANs so I can start emulating a network as similar to a real ISP network as possible in terms of customer connectivity to the internet.

I have a few assignments due over the next couple of weeks, as well as the interim report so I'll probably focus on those until the end of the semester, or at least til study week when they're all due.

21 May 2015

I've had MCPS going for a while now, however the structure I came up with initially made it difficult to pass frames between layers. For example it was up to the top layer to build the frame for sending - a bad way to do things!

The way it works now is that the MAC manages all of the frame memory and the upper layers simply pass in data. This is obviously a much nicer way to do things.

I also spent some time implementing some OS timers (which I am currently modifying to allow the addition of timers from an interrupt context). This allows the code to use callbacks, from a background context, to pass data around in a more event-driven manner. While I like the idea of having my own OS, ubiquiOS is available as an alternative which is likely to be far more stable. I'm still a little undecided, but a few more bugs in my OS and I'm sure I'll jump ship. The intention will be to keep an OS abstraction layer so ubiquiOS can be swapped in and out as needed.
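The timer mechanism can be modelled abstractly: timers sit in a queue ordered by expiry, and a background tick dispatches the callbacks of any that have expired. This is a simplified Python model of the idea, not the actual embedded implementation, and the names are made up.

```python
# Simplified model of callback-based OS timers: a heap ordered by
# expiry time, drained by a background "tick".
import heapq

class TimerQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal expiries stay ordered

    def add(self, expiry, callback):
        # In this model, adding only pushes onto the heap, which is the
        # kind of operation one would want to be safe from interrupt
        # context (the real code needs actual locking/atomicity).
        heapq.heappush(self._heap, (expiry, self._seq, callback))
        self._seq += 1

    def tick(self, now):
        """Fire the callback of every timer that has expired by 'now'."""
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            callback()
```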

I've totally restructured the MAC into several pieces: MCPS, MLME, coordinator and packet scheduler. The packet scheduler manages queues and the timing of sending and receiving packets. MCPS is working, however no CCA is currently done so collisions are possible. I plan to implement beacons before I worry about CCA.

Patrick and I worked together to get 802.15.4 packets into wireshark. Patrick and Richard managed to get the full 6LoWPAN packet decoding going by ignoring the FCS in the MAC packets.

20 May 2015

Updated the HTTP test to not include time spent fetching objects that
eventually timed out, as all that was doing was recording the curl
timeout duration. Instead, we need to report the number of failed
objects, last time an object was successfully/unsuccessfully fetched,
and possibly try to update the timeouts to match those commonly used by
browsers.

Switched the meaning of "in" and "out" for throughput tests, as
somewhere along the way this got switched. This involved updating
existing data in the database as well as the code that saves the data.

Added a bit more information to log messages to help identify the
specific amplet client that was responsible, as it was becoming
confusing in situations with multiple clients running on the same machine.

Started adding an interface to download raw data from the graph pages.
Partway through, as it was taking longer than expected, I took a slight
detour and wrote a standalone tool to dump the data from NNTSC.