I've had MCPS going for a while now, but the structure I came up with initially made it difficult to pass frames between layers. For example, it was up to the top layer to build the frame for sending - a bad way to do things!
The way it works now is that the MAC manages all of the frame memory and the upper layers simply pass in data. This is obviously a much nicer way to do things.
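As a rough sketch of the new arrangement - written in Python rather than the actual embedded code, with the class name, header length and field layout all invented for illustration - the MAC allocates and owns the frame buffer, and the upper layer only supplies payload bytes:

```python
# Illustrative model only: the MAC owns the frame memory and builds the
# header; upper layers never touch the frame structure themselves.

HEADER_LEN = 9  # hypothetical fixed header size for this sketch

class Mac:
    def mcps_data_request(self, dest: int, payload: bytes) -> bytes:
        """Allocate the frame, build the header, then copy the payload in."""
        frame = bytearray(HEADER_LEN + len(payload))
        frame[0] = 0x01                           # frame type: data (placeholder)
        frame[1:3] = dest.to_bytes(2, "little")   # 16-bit destination address
        frame[HEADER_LEN:] = payload              # upper layer's data, unmodified
        return bytes(frame)

mac = Mac()
frame = mac.mcps_data_request(0x1234, b"hello")
```

The upper layer never sees the header, so a change to the frame format stays contained inside the MAC.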
I also spent some time implementing some OS timers (which I am currently modifying to allow the addition of timers from an interrupt context). This allows the code to use callbacks, from a background context, to pass data around in a more event-driven manner. While I like the idea of rolling my own OS, ubiquiOS is available as an alternative that is likely to be far more stable. I'm still a little undecided, but a few more bugs in my OS and I'm sure I'll jump ship. The intention is to keep an OS abstraction layer so the OS can be swapped out and replaced as needed.
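One common way to make timer addition safe from an interrupt context is to have the interrupt handler only append to a simple staging queue, and do all the real bookkeeping from the background loop. A minimal Python model of that idea (not the real implementation - the names and the integer clock are invented for the sketch):

```python
import heapq
from collections import deque

class TimerScheduler:
    def __init__(self):
        self._pending = deque()  # ISR-safe staging area: append only
        self._heap = []          # (expiry, seq, callback), background context only
        self._seq = 0

    def add_timer(self, expiry, callback):
        """Safe to call from an interrupt: no heap manipulation here."""
        self._pending.append((expiry, callback))

    def tick(self, now):
        """Run from the background loop: merge pending timers, fire due ones."""
        while self._pending:
            expiry, cb = self._pending.popleft()
            heapq.heappush(self._heap, (expiry, self._seq, cb))
            self._seq += 1
        while self._heap and self._heap[0][0] <= now:
            _, _, cb = heapq.heappop(self._heap)
            cb()  # callbacks run in the background context, not the ISR

fired = []
sched = TimerScheduler()
sched.add_timer(5, lambda: fired.append("a"))
sched.add_timer(3, lambda: fired.append("b"))
sched.tick(now=4)   # only the timer due at t=3 fires
```

Because the interrupt path only ever appends, the heap is never touched from two contexts at once.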
I've totally restructured the MAC into several pieces - MCPS, MLME, coordinator and packet scheduler. The packet scheduler manages the queues and the timing of sending and receiving packets. MCPS is working; however, no CCA is currently done, so collisions are possible. I plan to implement beacons before I worry about CCA.
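The shape of that decomposition might look something like the following sketch (the module names come from the report; everything inside them is invented for illustration):

```python
from collections import deque

class PacketScheduler:
    """Owns the TX queue and decides when frames go to the radio."""
    def __init__(self):
        self.tx_queue = deque()

    def enqueue(self, frame):
        self.tx_queue.append(frame)

    def service(self, radio_send):
        # No CCA yet, so frames go out as soon as the queue is serviced;
        # collisions are therefore possible.
        while self.tx_queue:
            radio_send(self.tx_queue.popleft())

class Mcps:
    """Data service: accepts frames and hands them to the scheduler."""
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def data_request(self, frame):
        self.scheduler.enqueue(frame)

sent = []
sched = PacketScheduler()
mcps = Mcps(sched)
mcps.data_request(b"\x01payload")
sched.service(sent.append)
```

Keeping queueing and timing inside the scheduler means MCPS and MLME stay thin interfaces over it.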
Patrick and I worked together to get 802.15.4 packets into Wireshark. Patrick and Richard managed to get full 6LoWPAN packet decoding going by ignoring the FCS in the MAC packets.
Updated the HTTP test to not include time spent fetching objects that
eventually timed out, as all that was doing was recording the curl
timeout duration. Instead, we need to report the number of failed
objects and the last time an object was successfully or unsuccessfully fetched,
and possibly try to update the timeouts to match those commonly used by
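The reporting change might look something like the following sketch, where timed-out fetches no longer contribute their (fixed) curl timeout to the recorded duration and are counted separately instead. All field names here are invented:

```python
def summarise_fetches(results):
    """results: list of dicts with 'duration', 'timestamp' and 'ok' keys."""
    ok = [r for r in results if r["ok"]]
    failed = [r for r in results if not r["ok"]]
    return {
        "fetch_time": sum(r["duration"] for r in ok),  # excludes timeouts
        "failed_objects": len(failed),
        "last_success": max((r["timestamp"] for r in ok), default=None),
        "last_failure": max((r["timestamp"] for r in failed), default=None),
    }

stats = summarise_fetches([
    {"duration": 0.2, "timestamp": 100, "ok": True},
    {"duration": 30.0, "timestamp": 110, "ok": False},  # hit the curl timeout
    {"duration": 0.4, "timestamp": 120, "ok": True},
])
```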
Switched the meaning of "in" and "out" for throughput tests, as
somewhere along the way they had been reversed. This involved updating
existing data in the database as well as the code that saves the data.
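The database side of a fix like that can be done in a single statement, so no row gets flipped twice. A sketch using an in-memory SQLite table (the real schema is not reproduced here; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE throughput (id INTEGER, direction TEXT, mbps REAL)")
conn.executemany("INSERT INTO throughput VALUES (?, ?, ?)",
                 [(1, "in", 94.2), (2, "out", 87.5)])

# Swap "in" and "out" atomically; anything else is left untouched.
conn.execute("""
    UPDATE throughput
    SET direction = CASE direction
                    WHEN 'in' THEN 'out'
                    WHEN 'out' THEN 'in'
                    ELSE direction END
""")

rows = sorted(conn.execute("SELECT id, direction FROM throughput"))
```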
Added a bit more information to log messages to help identify the
specific amplet client that was responsible, as it was becoming
confusing in situations with multiple clients running on the same machine.
Started adding an interface to download raw data from the graph pages.
Partway through, it was taking longer than expected, so I took a slight
detour and wrote a standalone tool to dump the data from NNTSC.
Further additions have been made to the chapters on trace-based Doubletree and Megatree. In preparation, I validated the simulators: I made sure that each trace was processed exactly once, and dumped small numbers of traces to check that the algorithm behaved correctly.
The packet-event-based simulator IS0 is also being validated. Some errors have occurred, and I am investigating their causes.
Went to see Brad on Friday about getting nProbe installed somewhere so I can collect some traffic information. Had to wait for a reply from ntop with the license to activate it. Saw Brad again this morning and got it activated, so now I can experiment with nProbe and carry on with the application development.
Unfortunately the only open source collector I could find that supports databases only supports NetFlow version 5, but I want to target IPFIX as it is standard. I will look further into lpicollector since it is open source, and see if there is a practical way to get application information from each flow.
I am going to organise a meeting with Bob Durrant from Statistics to discuss my project and get his opinion on approaches. Bob has also suggested an expert on text mining who would be glad to discuss my project.
For these upcoming meetings, and in general, it was suggested that I make brief summaries of the information I am working on. To that end, I created a program that processes my log files and outputs a word frequency count for each set of input files belonging to a process. The output is in .csv format so it can easily be loaded into Excel to be sorted, plotted and so on. It can also take a specific date as input, so the logs for one specific day can be filtered.
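A condensed sketch of such a tool is below. The log format here ("date process message") and the CSV layout are invented for the example; the real formats may differ:

```python
import csv
import io
import re
from collections import Counter

LINE = re.compile(r"^(\d{4}-\d{2}-\d{2}) (\S+) (.*)$")

def word_frequencies(lines, date=None):
    """Count word frequencies per process, optionally filtered to one date."""
    counts = {}  # process name -> Counter of words
    for line in lines:
        m = LINE.match(line)
        if not m or (date and m.group(1) != date):
            continue
        proc, msg = m.group(2), m.group(3)
        counts.setdefault(proc, Counter()).update(msg.lower().split())
    return counts

def to_csv(counts):
    """Emit process,word,count rows for loading into a spreadsheet."""
    out = io.StringIO()
    w = csv.writer(out)
    w.writerow(["process", "word", "count"])
    for proc, ctr in sorted(counts.items()):
        for word, n in ctr.most_common():
            w.writerow([proc, word, n])
    return out.getvalue()

logs = [
    "2014-03-10 dhcpd lease renewed",
    "2014-03-10 dhcpd lease expired",
    "2014-03-11 sshd login failed",
]
counts = word_frequencies(logs, date="2014-03-10")
```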
Next I will use my program to build frequency tables for each of the processes in my log files (where the process would be considered the author in text-mining terms). I will then take a few lines from each process to give a short summary of its output, which will give an overview of the type of files I am working with.
I've primarily been spending my time working fastpath into RouteFlow. This has not been as simple as I expected, because RouteFlow itself does not control the rules on the OVS between the lxc that's doing the routing and the controller machine ('dp0'). After a discussion with Brad, I chose to leave this separation in place and instead wrote a tool that parses the configuration and sets up dp0 correctly. I've modified the Vandervecken setup scripts to include the necessary rules, which are loaded from the RouteFlow configuration files.
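A tool like that might boil down to emitting ovs-ofctl commands from the parsed configuration. The sketch below is an assumption about the shape, not the actual tool: the port mapping, bridge name and cross-connect rules are invented, and the commands are only generated, not executed:

```python
def dp0_flow_commands(port_map, bridge="dp0"):
    """port_map: lxc router port -> physical port, from the parsed config."""
    cmds = []
    for lxc_port, phys_port in sorted(port_map.items()):
        # Plain cross-connects between the routing lxc and the outside world.
        cmds.append(f"ovs-ofctl add-flow {bridge} "
                    f"in_port={lxc_port},actions=output:{phys_port}")
        cmds.append(f"ovs-ofctl add-flow {bridge} "
                    f"in_port={phys_port},actions=output:{lxc_port}")
    return cmds

cmds = dp0_flow_commands({1: 10, 2: 11})
```

Generating the commands rather than running them directly makes the output easy to inspect before it touches a live switch.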
I've read over a paper recommended by Richard Nelson, "What You Need to Know About SDN Flow Tables", which makes some interesting points about the performance and correctness of currently available switches - both of which have substantial room for improvement.
Successfully set up two Raspberry Pis in a 6LoWPAN network, with the two exchanging packets.
Used the Texas Instruments CC2531 USB dongle in conjunction with the SmartRF software to perform packet sniffing of the 802.15.4 network.
Captured packets are in .psd format, which cannot be opened natively in Wireshark.
There have been a few home-built converters (.psd to .pcap); however, so far they have mostly produced malformed packets, so they are untrustworthy.
Spent a week working on the amp-web matrix. The first task was to add HTTP and throughput test matrices so we could make the BTM website available to the various participants. This was a bit trickier than anticipated, as a lot of the matrix code was written with just the ICMP test in mind, so there were many hard-coded references to IPv4/IPv6 splits that were not appropriate for either test.
Updated the amp mesh database to list which tests were appropriate for each mesh. This let us limit the mesh selection dropdowns to only those meshes appropriate for the currently selected matrix, as there was little overlap between the targets of the latency, HTTP and throughput tests.
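The filtering itself is straightforward once each mesh lists its tests. A small sketch, with an invented mesh-to-tests mapping standing in for the new database column:

```python
# Hypothetical mapping: mesh name -> set of tests run against that mesh.
MESH_TESTS = {
    "nz-ampsites": {"icmp", "tcpping"},
    "btm-targets": {"http", "throughput"},
}

def meshes_for_matrix(test):
    """Return only the meshes that actually run the selected test."""
    return sorted(m for m, tests in MESH_TESTS.items() if test in tests)

options = meshes_for_matrix("http")
```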
This week and last I implemented a few basic filters, configured the time series graph scales and added a table displaying the most prominent flows and the associated device.
It turns out pcap doesn't store direction information, so I couldn't implement the ingress/egress filter. pcap-ng does support direction, though libtrace doesn't support that format yet. It should be easy enough to implement once the information is available.
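For reference, pcap-ng carries direction in the Enhanced Packet Block's epb_flags option: per the pcapng specification, the low two bits are 0 = information not available, 1 = inbound, 2 = outbound. Decoding that is trivial once the format is readable; a small helper to illustrate:

```python
def epb_direction(epb_flags: int) -> str:
    """Decode the direction bits (bits 0-1) of a pcap-ng epb_flags word."""
    return {0: "unknown", 1: "inbound", 2: "outbound"}.get(
        epb_flags & 0x3, "reserved")
```

So the ingress/egress filter mostly waits on format support, not on any decoding work.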
Recently I have been planning my presentation which is this Wednesday.
Plan on seeing Brad either this week or next week to get NetFlow configured so I can start working with real data.
Whoops. Missed a report last week...
Last week I wrote the radio driver which is able to send and receive 802.15.4 data packets. The Contiki software was useful to see which registers I needed to initialise to get data moving. There's plenty left to do but it's enough to get running.
This week I have started writing the MAC layer. I want to start with the PAN coordinator, specifically with beaconing. Once I have beacons, I can start work on the three scan types - active, passive and energy detection. This will allow the PAN coordinator to survey the site before starting a network.
Once I have the surveys and beacons working, I can get association working.
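For beacons, the starting point is the 802.15.4 frame control field, whose low three bits give the frame type (0 = beacon, 1 = data, 2 = ack, 3 = MAC command). A minimal Python sketch of assembling the start of a beacon frame - the addressing-mode bits and the superframe specification are omitted here, so this is a simplification rather than a full header:

```python
FRAME_TYPE_BEACON = 0x0

def frame_control(frame_type, security=False, pending=False, ack_request=False):
    """Build a (simplified) 16-bit 802.15.4 frame control field."""
    fcf = frame_type & 0x7
    fcf |= (1 << 3) if security else 0     # security enabled
    fcf |= (1 << 4) if pending else 0      # frame pending
    fcf |= (1 << 5) if ack_request else 0  # ack request
    return fcf

def beacon_header(seq):
    """Frame control (little-endian) followed by the sequence number."""
    fcf = frame_control(FRAME_TYPE_BEACON)
    return fcf.to_bytes(2, "little") + bytes([seq])

hdr = beacon_header(seq=7)
```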