Converted the DNS, TCPPing, traceroute and throughput tests to report
data using protocol buffers, and updated the scripts used by nntsc to
extract/parse the report messages. Updated the build system to
automatically build all the appropriate files from the .proto definitions.
Wrote some unit tests to make sure that the data being put into the
protocol buffers was the same as the data coming out and that optional
fields were appropriately present/absent.
Started collecting some more data on slow HTTP tests, dumping full
result data to try to see if there are any patterns around what objects
are slow to fetch, and which part of the transfer process is slow.
This week I continued work on oflops; I spent most of the time getting TLS working. Creating certificates was a relatively painless part, particularly after Brad explained that we were creating one CA for switches and another for controllers. This allows switches to talk to controllers, but not switches to other switches or controllers to other controllers.
Once this was working I kept hitting a deadlock. This turned out to be a bug in libevent which is fixed in the next release, so I switched to using the 2.1.5 beta. With some small modifications to libfluid this appears to be working correctly now.
This week I also got Vandervecken deployed on the WAND production SDN network, with Brad's help of course. The initial attempt on Wednesday failed due to timing issues on startup, which I was aware of but did not expect to be a problem. I fixed these by adding some caching to delay some rules until they could be installed. On Friday we deployed it successfully, though with broken IPv6. The IPv6 issue is due to a hardware bug; for now we are fastpathing all IPv6 traffic, which is working well.
I was hoping to be ready to test with oflops this week, but I invested a lot of time in RouteFlow/Vandervecken instead. As such I will spend next week writing my oflops tests, and after that I should hopefully be able to move on to preparing my full proposal.
Given the lack of support for Netflow v9 and IPFIX, I have decided to make my application compatible with Netflow v5, v9 and IPFIX. To do this, I simply check the database for MAC addresses. If there are none, I fall back to using source IP addresses to differentiate between devices. As a result, the device-specific information is not as accurate as it would be with MAC addresses. Concerning the user interface, I am gradually adding new features week by week.
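A minimal sketch of that fallback, with illustrative flow-record field names (not the application's real schema): use the MAC address when the record carries one, otherwise key on the source IP.

```python
# Hypothetical flow records: Netflow v9/IPFIX records may carry MAC fields,
# Netflow v5 records do not, so those only have IP addresses to go on.

def device_key(flow):
    """Return an identifier for the device behind this flow."""
    mac = flow.get("src_mac")
    if mac:
        return ("mac", mac.lower())
    return ("ip", flow["src_ip"])  # less accurate under dynamic addressing

flows = [
    {"src_mac": "AA:BB:CC:DD:EE:FF", "src_ip": "10.0.0.5"},  # v9/IPFIX
    {"src_ip": "10.0.0.7"},                                  # v5, no MAC
]
keys = [device_key(f) for f in flows]
```

The tuple tag ("mac" vs "ip") keeps the two identifier spaces separate, so a device seen both ways is not silently merged.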
I received feedback on two chapters and made changes based on this. The chapters were data collection and load balancer prevalence in the Internet.
Continued digging into the unknown traffic in the day-long Waikato trace I captured last week. Diminishing returns are starting to really kick in now, but I've still managed to add another 9 new protocols (including SPDY) and improved the rules for a further 8.
Worked on a series of scripts to process the results of running the Plateau detector under a variety of possible configurations (e.g. history and trigger buffer sizes, sensitivity thresholds, etc.). The aim is to find the optimal set of parameters based on the ground truth we already have. Of course, some parameter combinations are going to produce events that we have never seen before, so I've also had to write code to find these events and generate suitable graphs so I can use my web-app to quickly classify them manually.
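The sweep over detector configurations amounts to a cross product of candidate settings; the parameter names and values below are placeholders, not the real ones.

```python
import itertools

# Candidate Plateau detector settings (illustrative values only).
HISTORY_SIZES = [30, 60, 120]    # samples kept in the history buffer
TRIGGER_SIZES = [5, 10]          # samples kept in the trigger buffer
THRESHOLDS = [2.0, 3.0, 4.0]     # sensitivity threshold

# Every combination gets run against the same ground-truth event set,
# and the combination with the best detection score wins.
configs = list(itertools.product(HISTORY_SIZES, TRIGGER_SIZES, THRESHOLDS))
print(len(configs))  # 3 * 2 * 3 = 18 parameter sets to evaluate
```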
Spent a fair bit of time helping Yindong with his experiments.
Now that I have something that works with Q-in-Q, I've made progress on a couple of security considerations relating to MAC spoofing and keeping my L2 network segregated.
I have functionality where the source MAC handed to the RADIUS server is set by my controller and maps back to an S/CVID pair. When the traffic comes back, the original MAC is restored so the client doesn't get confused.
I am also considering having this same functionality with traffic going to a router.
I also need to figure out whether I will be passing both VLANs to the router, or none, or one, and how I will get VLANs back into the Handover Port.
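The MAC-rewrite bookkeeping above can be sketched as a simple lookup table; the function names and synthetic MAC format here are my own illustration, not the controller's actual API.

```python
import itertools

# synthetic MAC -> (original MAC, (S-VID, C-VID)). The synthetic address
# uses the locally administered 02:... prefix so it cannot collide with a
# real vendor-assigned MAC.
_ids = itertools.count(1)
rewrite_table = {}

def rewrite_outbound(real_mac, svid, cvid):
    """Swap in a controller-chosen MAC before traffic reaches RADIUS."""
    synthetic = "02:00:00:00:%02x:%02x" % divmod(next(_ids), 256)
    rewrite_table[synthetic] = (real_mac, (svid, cvid))
    return synthetic

def restore_inbound(synthetic_mac):
    """On return traffic, recover the client's real MAC and the VID pair."""
    return rewrite_table[synthetic_mac]
```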
Also fixed a "bug" where I didn't send packet_ins back out, so that's back in the code.
Have stopped work on the project, and am instead working on my seminar. I have one at Virscient on Tuesday, one in class on Wednesday, one at WAND the following week and then the final seminar the week after that.
Before I present, I really need some actual evaluation data. My title, "Creating a secure and low power internet of things platform", suggests I'm concerned with security and low power. I've got the security (I followed standards including RSNA, so it's as good as 802.11, assuming I implemented it correctly). For the low power, I'm not going to have time to implement serious power savings before my presentations (maybe before the final one, who knows). My goal instead is to waggle GPIO pins to indicate which state the CPU is in, and then use the datasheet values to calculate the energy that would be used if I added power savings.
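The energy estimate I have in mind looks roughly like this; the supply voltage, state currents and state times are made-up placeholder numbers, not measurements or real datasheet figures.

```python
SUPPLY_V = 3.3                      # hypothetical supply voltage

STATE_CURRENT_A = {                 # hypothetical datasheet currents
    "active": 0.015,
    "idle":   0.003,
    "sleep":  0.000002,
}

def energy_joules(time_in_state_s):
    """E = V * sum(I_state * t_state), using GPIO-measured state times."""
    return SUPPLY_V * sum(STATE_CURRENT_A[state] * t
                          for state, t in time_in_state_s.items())

# e.g. one minute split between states, as read off the waggled pins
profile = {"active": 1.0, "idle": 4.0, "sleep": 55.0}
```

Swapping in different hypothetical sleep currents then shows what a given power-saving scheme would buy without having implemented it yet.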
At the same time, this profiling will tell me where most of the power is used and where to focus my power saving efforts!
I've been sick the past week, so aside from the presentation I haven't been able to do this profiling. Maybe later today.
Built and tested new amplet client packages for the wheezy portion of
the New Zealand mesh, including the ASN lookup fixes from last week.
Deployed them on the local test amplet to run.
Chased up a few issues that had come to light recently, including using
the correct credentials to sign the cert used by apache when serving
client keys, the HTTP test incorrectly reporting "-1" data rather than
None/missing, and a few minor compiler warnings.
Started to move the amplet test reporting away from handcrafted
structures to Google protocol buffers. This will take care of a lot of
the boring bits around encoding variable length data and makes it a lot
easier to report only the data required to describe a test result
(rather than including unused fields). So far I have updated the ICMP
test to use protocol buffers and it has been a pleasantly easy experience.
Writing up the Network Schema report; running into problems with knowledge gaps regarding RAs.
The Californium CoAP server is working, but resources are added prior to server execution. Handling of requests is done within the resource, so adding new resources to a running server is problematic.
Could look into getting a CoAP protocol library and manually writing server application.
The application can now sort by device MAC, direction, ports and time (monthly, daily and hourly). It displays the usage, protocols and a usage timeline. Next I will create visuals to give an overview of all the devices' impact on the network, and give the user the option to switch from graphs to tables in order to get more detailed statistics.
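The monthly/daily/hourly grouping can be sketched as bucketed aggregation; the flow field names here are illustrative, not the application's real schema.

```python
from collections import defaultdict
from datetime import datetime, timezone

FORMATS = {"monthly": "%Y-%m", "daily": "%Y-%m-%d", "hourly": "%Y-%m-%d %H"}

def bucket(flow, granularity):
    """Group a flow by device MAC, direction and time period."""
    ts = datetime.fromtimestamp(flow["ts"], tz=timezone.utc)
    return (flow["mac"], flow["direction"], ts.strftime(FORMATS[granularity]))

usage = defaultdict(int)   # bucket -> total bytes
flows = [
    {"mac": "aa:bb:cc:00:00:01", "direction": "in", "ts": 1400000000, "bytes": 500},
    {"mac": "aa:bb:cc:00:00:01", "direction": "in", "ts": 1400000100, "bytes": 300},
]
for f in flows:
    usage[bucket(f, "daily")] += f["bytes"]
```

Changing the granularity argument regroups the same flows into coarser or finer buckets, which is all the monthly/daily/hourly switch needs.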
I am looking at changing my collector (yet again) back to Netflow, since I can see in my application that the amount of data uploaded is about four times that of the data downloaded. This is because sFlow samples 1 in every 1024 packets on each interface, i.e. 1 uplink interface vs 24 local interfaces. Netflow version 9 isn't possible given the hardware available, so I looked into software which could export Netflow version 9 for me, with the desired interfaces mirrored to it. I found a program called softflowd which does this, but for some reason the MAC addresses are always zero. I have tested it on multiple machines with different collectors. I have contacted the creator of softflowd to ask why the captured MAC addresses are always zero, and whether it would be much work to add support for exporting them. Other than softflowd, there are no open source probes which can export Netflow version 9.
If this can't be done, I'll switch to using IP addresses to identify the local devices. While this won't be as useful due to dynamic IPs, it will make the application a lot more flexible.