28 Mar 2015

Started work on the RPi adaptation of the base station:
Working on getting the required tools and libraries into the latest version of Raspbian in order to use the 802.15.4 interface. So far I have installed the libnl library and the iwpan tools from source, but have yet to test whether they work with the interface.

The RPi board needs to be set up for ease of access so I don't have to continuously switch peripherals between the PC and the RPi: it needs a static IP (working for the moment, but it breaks when a network connection is shared to the board) and a VNC server on boot. This is set up using TightVNC for the moment, with a viewer on a Win7 laptop.

Next is to set up a sniffing program (with the USB dongle) and attempt to use the add-on board to transmit packets, to test whether the add-on is accessible.
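
As a first cut, the sniffing side could look something like the Python sketch below, assuming a Linux 802.15.4 interface named wpan0 is already up (the interface name is my assumption, and it needs root):

    import socket

    ETH_P_IEEE802154 = 0x00F6  # from linux/if_ether.h
    IFACE = "wpan0"            # hypothetical interface name

    # Raw AF_PACKET socket bound to the 802.15.4 interface; each
    # received frame is delivered complete with link-layer header.
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                         socket.htons(ETH_P_IEEE802154))
    sock.bind((IFACE, 0))

    while True:
        frame, _ = sock.recvfrom(256)  # 802.15.4 frames are at most 127 bytes
        print("got %d bytes: %s" % (len(frame), frame.hex()))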

27 Mar 2015

The earlier half of this week was tied up with other paper commitments. I've now been set up with access to comet, along with some snapshots of VMs I will be using to test controllers and topologies (OpenFlow, Mininet). I've had a bit of a play with some example topologies and controller applications.
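
For the record, the sort of toy topology I've been playing with looks roughly like this (a minimal sketch, assuming a controller is already listening on the default local OpenFlow port):

    from mininet.net import Mininet
    from mininet.node import RemoteController
    from mininet.topo import Topo

    class PairTopo(Topo):
        # Two hosts behind a single switch: about the simplest
        # topology that still exercises the controller.
        def build(self):
            s1 = self.addSwitch("s1")
            for i in (1, 2):
                self.addLink(self.addHost("h%d" % i), s1)

    net = Mininet(topo=PairTopo(),
                  controller=lambda name: RemoteController(name, ip="127.0.0.1"))
    net.start()
    net.pingAll()   # basic connectivity check through the controller
    net.stop()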

I've also managed to get hold of Craig Osborne's code, which appears to have some good concepts and some code I will be able to recycle and adapt for my own project.

My next step is to design the type of topology that is ideal for an ISP environment and also works with other deployments such as university campuses. The primary factors are the ability to scale and to be redundant. AAA services also need to be considered: not so much accounting, but authentication and authorisation.

My first point of contact for this was Chris Browning from Lightwire, but it turns out he is going on leave for the next few weeks, so I'll likely consult Scott Raynel to see what the current topology is like and whether he knows what Chris' direction was. My next point of contact is likely to be Brad Cowie.

I also need to find out how provisioning of hardware, physical or virtual, works, as so far it's been pretty much all Python code.

25 Mar 2015

Made the HTTP test more consistent with the other tests when reporting
test failure due to name resolution or similar (rather than the target
failing to respond). Also added the option to suppress parsing the
initial object as HTML and to just fetch it.
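
The distinction being made is roughly the following (a hedged Python sketch of the idea, not the actual amplet test code):

    import socket
    import urllib.error
    import urllib.parse
    import urllib.request

    def classify(url, timeout=10):
        host = urllib.parse.urlparse(url).hostname
        try:
            # A name that doesn't resolve is an error with the test,
            # not a failure of the target.
            socket.getaddrinfo(host, None)
        except socket.gaierror:
            return "error: name resolution failed"
        try:
            urllib.request.urlopen(url, timeout=timeout)
        except urllib.error.HTTPError:
            return "ok: target responded with an HTTP error"
        except (urllib.error.URLError, socket.timeout):
            # The name resolved but the target never answered.
            return "failure: target did not respond"
        return "ok"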

Found and fixed a problem with long interface names being used inside my
network namespaces. Linux appears to allow longer interface names than
dhclient can deal with, so I've had to shorten some of my more
descriptive interface names.
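
Something as simple as this is enough to keep generated names inside the limit (a hypothetical helper with names of my own, not anything from dhclient):

    IFNAMSIZ = 16  # Linux limit on interface names, including the trailing NUL

    def safe_ifname(name, limit=IFNAMSIZ - 1):
        # Truncate descriptive interface names so tools with smaller
        # buffers (dhclient, in this case) will still accept them.
        return name[:limit]

    print(safe_ifname("veth-amplet-client-upstream"))  # "veth-amplet-cli"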

Spent some time measuring the quantity of test data to get an estimate
of how much database storage will be required for the new test clients.
Also looked at how much data the throughput tests are likely to use
(multiple TB a month), and possible locations that might be
suitable to test to without hitting rate limits or interfering with
other traffic.
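
The estimate itself is just back-of-envelope arithmetic along these lines (the numbers below are made-up placeholders, not the measured figures):

    clients = 50                 # placeholder client count
    results_per_day = 24 * 60    # e.g. one result a minute per client
    bytes_per_result = 250       # row size plus index overhead
    days = 30

    monthly = clients * results_per_day * bytes_per_result * days
    print("~%.2f GB of result data a month" % (monthly / 1e9))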

Continued to check various parts of the process chain to make sure that
they perform robustly when bits go away, networking is lost, machine
reboots etc.

24 Mar 2015

It was decided to simulate an extra TTL value, using the IS0 event-driven Internet simulator, to achieve a better level of optimisation of this parameter. Several runs were initiated and the data files are being collected.

Work has begun on the Megatree chapter. A description of Megatree was written and some of the appropriate graphs have been accumulated in the thesis document. Megatree avoids discovering the same load balancer more than once: when a load balancer divergence point is encountered in a new trace, stored information indicates where possible convergence points might be, in terms of hop count and IP address.
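
The core of the idea can be sketched in a few lines (this is my own illustration of the concept, not the Megatree implementation):

    # Map each known divergence point to a hint about where the
    # diverging paths converge again, so a later trace that hits the
    # same load balancer can skip straight to the convergence point.
    convergence_hints = {}

    def record(divergence_ip, hops_to_convergence, convergence_ip):
        convergence_hints[divergence_ip] = (hops_to_convergence, convergence_ip)

    def lookup(hop_ip):
        # Returns (hop count, IP) if this hop is a known divergence
        # point, or None if it still needs to be explored.
        return convergence_hints.get(hop_ip)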

The collection of per-destination ICMP traces is still underway. Two types are being collected sequentially in time, gathered with seven or 128 flow IDs. Of interest is whether they count the same number of load balancer divergence points, and how much traffic is involved in doing so.

24 Mar 2015

This report will cover the progress to date. I've also attached the proposal I submitted in case anyone is interested in reading that.

In order to make a start there are two areas that need to be considered. The first, and likely most important, is how the provisioning flow is going to work. IEEE 802.15.4 has some built-in AES security features, as well as different channels (much like Wi-Fi has channels). At the link layer there needs to be a way to negotiate some kind of connection, which will involve scanning for a channel both nodes can talk on, as well as some kind of key exchange to encrypt the communication. Above that, in the network layer, we probably want some kind of authentication and encryption to ensure that nodes are who they say they are. Finally, the application layer will need to support some kind of key store so users can add the new node before trying to connect it to the network.
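
To make the application-layer piece concrete, the key store could be as simple as the sketch below (all the naming here is my own; 802.15.4 itself only specifies the AES-128 keys):

    import os

    class KeyStore:
        # Hypothetical application-layer key store: the user registers
        # a node before it is allowed to join the network.
        def __init__(self):
            self._keys = {}

        def add(self, node_id):
            # 16-byte keys to match 802.15.4's AES-128 security suite
            self._keys[node_id] = os.urandom(16)
            return self._keys[node_id]

        def lookup(self, node_id):
            # None means "unknown node": refuse to provision it
            return self._keys.get(node_id)

    store = KeyStore()
    key = store.add("node-42")
    assert store.lookup("node-42") == key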

The other side that needs to be considered is which platform the project should be built on. The CC2538 hardware has already been decided on. Not only does it have IEEE 802.15.4 radios, it is also supported by all the major WSN platforms I could find. The other convenient aspect is that we already have development boards for it. The software, at this stage, will be either RIOT OS or ContikiOS. The issue with ContikiOS is that I've seen Brad and Isabelle working with it and it seems like there are a number of challenges along the way. RIOT OS lacks gateway software, which would make it difficult to get any of the application layer stuff working. A gateway is also useful for sniffing network traffic, but that can be done with any of the nodes and a serial port.

I think I'll start by reading through the IEEE 802.15.4 standard to gain some understanding of how the channels and encryption work, as those are likely to be key to the project. I may as well take the opportunity to draw some diagrams at the same time (which will help with report writing!).

23 Mar 2015

Back after a week on holiday. Spent a decent chunk of time catching up on emails, mostly from students having trouble with the 513 libtrace assignment.

Continued tweaking and testing the new eventing code. Discovered an issue where the "live" exporter was operating several hours behind the time the data was arriving. It looks like there is a bottleneck in one of the internal queues when a client subscribes to a large number of streams, but I'm still investigating this one.

prophet started to run out of disk space again, so I had to stop our test data collection, purge some old data and wait for the database to finish vacuuming to regain some disk space. Discovering that we had a couple of GBs of rabbitmq logs wasn't ideal either.

While fixing the prophet problem, I did some reading and experimenting with suffix trees created from AS paths, with the aim of identifying common path segments that could be used to group latency events. There doesn't appear to be a Python suffix tree module that does exactly what I want, but I'm hoping I can tweak one of the existing ones. The main thing I'm missing is the ability to update an existing suffix tree after concatenating a new string, rather than having to create a whole new tree from scratch.
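
As a stopgap, the "common path segments" part can be approximated with naive substring counting, sidestepping the suffix tree entirely; a rough sketch (quadratic per path, so only viable because AS paths are short):

    from collections import defaultdict

    def common_segments(paths):
        # Count, for every contiguous run of ASes, how many paths
        # contain it at least once.
        counts = defaultdict(int)
        for path in paths:
            segments = set()
            for i in range(len(path)):
                for j in range(i + 1, len(path) + 1):
                    segments.add(tuple(path[i:j]))
            for seg in segments:
                counts[seg] += 1
        return counts

    paths = [(701, 1299, 2914, 681), (3356, 1299, 2914, 681)]
    counts = common_segments(paths)
    shared = [s for s, c in counts.items() if c == len(paths)]
    print(max(shared, key=len))  # (1299, 2914, 681)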

20 Mar 2015

Wrote my proposal. Have changed the project slightly to get away from the idea that it is strictly about NFV and making a vBNG.

19 Mar 2015

Installed and configured nntsc and postgres on the new test machine in
order to keep a local copy of all the data that will be collected. This
also has the ability to duplicate the incoming data and send it off to
another rabbitmq server for backup/visualisation.

Made some minor changes to the amplet client for the new test machine
that required building new packages, which are now available in a custom
repository to keep them separate from the others. Installed the amplet
client on the new test machine and configured it to run multiple
clients, testing the network namespace code.
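
For reference, the namespace side boils down to something like this sketch (my own code, not the infrastructure scripts; it assumes the namespace was created with "ip netns add" so a handle exists under /var/run/netns):

    import ctypes
    import os

    CLONE_NEWNET = 0x40000000
    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    def run_in_namespace(name, argv):
        # Switch this process into the named network namespace with
        # setns(2), then exec the requested command inside it.
        fd = os.open("/var/run/netns/%s" % name, os.O_RDONLY)
        if libc.setns(fd, CLONE_NEWNET) != 0:
            raise OSError(ctypes.get_errno(), "setns failed")
        os.close(fd)
        os.execvp(argv[0], argv)

    # e.g. run_in_namespace("amplet1", ["ip", "addr", "show"])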

Continued to test some of the infrastructure code around
starting/stopping amplet clients, creating network namespaces etc. Found
and worked around a few small problems where tools were not properly
namespace aware, or would not create files belonging to a namespace (but
would happily use them if they already existed).

17 Mar 2015

Updates were carried out on the Doubletree simulator chapters. A graph of the distribution of trace lengths was generated after running warts analysis on the downloaded CAIDA Traceroute data set. The same also needs to be done for the MDA data set that was also used for simulation.

A run on the IS0 simulator was initiated for the few-to-many scenario, covering all levels of the stages variable, with a start TTL of 8. This is for comparison with the source windows data that also used this start TTL setting. I should possibly do some more start TTL levels to get a better optimisation.

A per-destination MDA cycle is running on PlanetLab. It will take two weeks and will compare the use of a small number of flow IDs with a large number. The ability to find load balancers will be compared, but not the ability to find all successors, as this is already known to differ markedly.

Some minor changes were made to the Churn paper. In particular a graph was updated to be more readable.

16 Mar 2015

Added graphs for the HTTP test to amp-web, which helped reveal a couple of problems with the HTTP test that Brendon duly fixed.

Updated amp-web to support the new eventing database schema. Managed to get eventing up and running successfully, with significant event groups appearing on the dashboard and events also marked on the graphs themselves.

Gave my annual libtrace lectures, which seemed to go fairly well. Already I've got students thinking about and working on the assignment so they are at least smart enough to start early :)

Continued to develop a tutorial for my event classification app. Found plenty of good examples of different types of events so now it is just a matter of writing all the explanatory text for each example.