I got AES going! 802.15.4 uses the CCM* block cipher mode with AES as the cipher (using 128-bit keys).
This was really difficult! Small changes in the input resulted in wild changes in the output (as you would expect!). I wrote a tool with libtomcrypt to create some reference data (something that I would then try to replicate with the CC2538 libraries). In the end, I managed to make this work.
I also got this going with Wireshark (which highlighted an endianness issue with my libtomcrypt tool). Wireshark will now accept my AES encrypted messages which gives me confidence that the implementation is correct.
The implementation is a bit hacked together at the moment (a result of all the experimenting to get it going), but tidying it up should be simple. Then I can move on to decryption.
As for the security, I have no idea how well this implementation would perform against side-channel power analysis attacks. This is something I'm not even going to consider trying to defend against at this stage! In the future, however, this might be important.
Spent some time investigating unusual data to make sure it wasn't
occurring in the amplet tests. Monitoring of management connections
found one that was sharing a physical link with a test connection. Some
HTTP tests were having unusually long run times, which appears to be
caused by the server infrastructure rather than our own DNS lookups.
Started testing a new version of the amplet client for deployment on the
NZ mesh. Ran into an issue with our large schedule files where a count
variable was too small and overflowing. Results were collected fine, but
most of them were being thrown away when reported. Split all report
messages into smaller chunks as a short-term solution that doesn't
require updating the server-side code (we still aim to move to something
smarter like Protocol Buffers).
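The chunking workaround above can be sketched as follows (names and the chunk size are illustrative, not the actual amplet client code; the idea is just to keep each report's result count safely below the limit of an undersized counter):

```python
# Sketch of the chunked-reporting workaround (hypothetical names). If the
# count field in a report header is, say, 16 bits, it overflows above
# 65535 results, so each report is split into comfortably small chunks.

CHUNK_SIZE = 1024  # results per report message, well under a 16-bit limit

def chunk_results(results, chunk_size=CHUNK_SIZE):
    """Yield successive slices of the result list, each small enough that
    the count field in the report header cannot overflow."""
    for start in range(0, len(results), chunk_size):
        yield results[start:start + chunk_size]

# Usage: report each chunk as its own message instead of one huge report.
chunks = list(chunk_results(list(range(70000))))
```

Because the server simply sees more, smaller report messages of the existing format, no server-side change is needed.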
Made no useful progress on getting Chromium to fetch/modify headers
without crashing. There are newer versions I need to try, but they
require more recent versions of libraries than I have.
I've been working on getting an OpenFlow switch testing platform ready this week; the plan is to have it complete by the end of the month, ready to run some tests.
I've decided to use OFLOPS-turbo as a starting point and add support for the newer versions of OpenFlow. I've opted to replace the OpenFlow connection handling with libfluid-base, which will handle the OpenFlow handshake and echoes as required, regardless of the OpenFlow version in use. The modules can then use their library of choice, such as rofl or libfluid, to construct and parse OpenFlow messages. I've also been fixing other issues with the code, such as high CPU usage due to busy-polling gettimeofday().
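The CPU-usage issue comes from waiting for a deadline by spinning on the clock rather than sleeping. A small illustrative sketch of the difference (OFLOPS-turbo itself is C code; this just demonstrates the principle):

```python
# Busy-polling the clock burns a full core while waiting; sleeping blocks
# in the kernel and consumes almost no CPU. Illustrative comparison only.
import time

def wait_busy(seconds):
    """Spin on the clock until the deadline passes (~100% CPU meanwhile)."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass  # analogous to spinning on gettimeofday()

def wait_sleep(seconds):
    """Block in the kernel until the deadline (near-zero CPU meanwhile)."""
    time.sleep(seconds)

# Compare CPU time consumed while waiting 50 ms each way.
t0 = time.process_time(); wait_busy(0.05);  busy_cpu = time.process_time() - t0
t0 = time.process_time(); wait_sleep(0.05); sleep_cpu = time.process_time() - t0
```

The same reasoning applies in the C code: replacing a gettimeofday() spin loop with a sleeping wait (e.g. on a timer or poll timeout) removes the wasted CPU without changing behaviour.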
I made good progress with the application. I have set up most of the back end, particularly the values used to query the database. I can request flow information for the network as a whole or on a per-device basis. I have also added the ability to assign a name to a MAC address. I was thinking of using the hostnames of the hosts, but that would be kind of hacky given only the MAC address, and reverse DNS would have to be set up on the local network.
Although sFlow was useful since it supports MAC addresses, its sampling isn't ideal for getting an accurate picture of the devices' behaviour on the network. I found a program called softflowd which listens on an interface and can export NetFlow version 9. It looks like NetFlow v9 is going to be the only protocol that can be used with my application, since it supports everything I require: in particular MAC addresses, direction information and application information. Currently I inspect the interface index to determine direction in my parser script, which means that SNMP must be configured to assign these values, which isn't likely on a home network. I hope to get port mirroring set up on a switch and use softflowd to construct and export NetFlow v9 packets to my collector. This won't affect the application itself because the database schema will be the same.
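The interface-index direction check can be sketched as below (the index values and function names are hypothetical; the real parser script reads whatever indexes the exporter's SNMP configuration assigns):

```python
# Sketch of classifying NetFlow v9 flow direction from interface indexes.
# The index values here are assumptions for illustration.
LAN_IFINDEX = 2   # assumed index of the inward-facing interface
WAN_IFINDEX = 3   # assumed index of the upstream interface

def flow_direction(input_ifindex, output_ifindex):
    """Classify a flow record as inbound or outbound based on which
    interface the packets entered and left on."""
    if input_ifindex == WAN_IFINDEX and output_ifindex == LAN_IFINDEX:
        return "inbound"
    if input_ifindex == LAN_IFINDEX and output_ifindex == WAN_IFINDEX:
        return "outbound"
    return "unknown"
```

This is exactly why the approach depends on the exporter assigning sensible interface indexes, which a typical home network won't have configured.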
Continued testing and tweaking the event grouping in netevmon. My main problem was the creation of seemingly duplicate groups in cases where further (usually out-of-order) detections were observed for events that were members of an already expired group. Eventually I tracked the problem down to the fact that the event was deleted when its group expired, so a later detection re-created both the event and its parent group.
Started looking into methods for determining whether an event is "common" or "rare", so that we can allow users to filter events that occur regularly from the dashboard. This meant I had to change our method of populating the dashboard -- previously, we just grabbed the appropriate events in the pyramid view and passed them into the dashboard template, but now we need to be able to dynamically update the dashboard depending on whether the common events are being filtered or not.
Added some nice little icons to each event group to show what type of events are present within the group without having to click on the group. The current icons show latency increases, latency decreases and path changes.
Have fully integrated the new packet scheduler. This works great! I have the default number of link-layer retransmissions enabled (3) and this gives great reliability to my ping demo. I also added some debug GPIO so I knew what was going on when and things line up well enough (4 - 5ms of error between the coordinator and device slot timers ain't bad!).
I'm now looking into the AES-CCM*-based security supplicant. I already had a supplicant that accepted messages to be encrypted/decrypted, but it didn't actually do any transformation on the data. The CC2538 has an AES core built in: you pipe data in one end and encrypted data comes out the other. TI provides a library for this, so I'm in the process of wiring it up. This is actually fairly simple once I figured out how the CCM mode operates (and understood the parameters L, M, a, m and c..!). I expect that once I get encryption working, decryption will be trivial (you simply input the cipher text rather than the plain text and run the same algorithm on the data).
I'm pretty pleased with progress to date!
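For generating reference data on the PC side (much like the libtomcrypt tool mentioned above), the same CCM parameters map onto Python's "cryptography" package as sketched below. The key, nonce and payload are made up; in CCM, L is the size of the length field, so the nonce is 15 - L bytes (802.15.4 uses L = 2, giving a 13-byte nonce), and M is the MAC/tag length.

```python
# Minimal AES-CCM sketch with the "cryptography" package, usable for
# producing reference vectors. Key, nonce and payload are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = os.urandom(16)               # AES-128 key, as used by 802.15.4
aead = AESCCM(key, tag_length=8)   # M = 8-byte MAC
nonce = os.urandom(13)             # 15 - L = 13 bytes for L = 2

plaintext = b"ping payload"        # m: the data to encrypt
header = b"frame header"           # a: authenticated but not encrypted

ciphertext = aead.encrypt(nonce, plaintext, header)  # c: m plus M-byte tag
recovered = aead.decrypt(nonce, ciphertext, header)  # the inverse operation
```

Decryption really is the mirror image: the same key, nonce and associated data recover the plaintext, which is what makes cross-checking against the CC2538 output straightforward.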
Supporting Q-in-Q is hard, apparently; the 'work' in workaround isn't a thing.
Have discovered a lovely behaviour: according to the OpenFlow 1.3 spec, the ethertype should be visible for frames after their VLAN tags. The plural is important, but what I'm seeing is that only the first two ethertypes get sent through, so matching on 0x0800 works with one VLAN tag, but a second one breaks it. Need to decide whether we submit a bug report based on this finding. This was found both in my testing environment and in hardware (Pronto).
Table hopping doesn't work because OVS and ovs-vswitchd don't recirculate frames: a process where headers get sent up to ovs-vswitchd, which does some work, then throws the packet back at the OVS datapath for the rest (this is done for MPLS). I could hack OVS to make it do this, but there's not much point since it won't run on hardware. May change my mind on this.
Set up a clean install of a DNS server using bind9 (sudo apt-get install bind9)
Configured the local configuration (named.conf.local) to include the zone "pan.com"
Copied the db.127 file to db.pan.com and edited it to hold the relevant information (zone, address of the device and the IPv6 address of the CoAP server)
Configured /etc/resolv.conf to include the name server address and zone
Restarted bind9 with "sudo /etc/init.d/bind9 restart"
dig pi.pan.com returns address "127.0.0.1" - correct
dig coap.pan.com returns no results - correct is beef::1; however, dig queries for an IPv4 address by default (...IN A... in the query)
specifying dig AAAA coap.pan.com returns address "beef::1" - correct
The DNS server is now configured, but it is not set up for caching results and outside name resolution hasn't been tested.
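The zone file copied from db.127 would end up looking something like this (a sketch only; the serial number, TTLs and NS name are illustrative):

```
; /etc/bind/db.pan.com -- illustrative zone file for "pan.com"
$TTL    604800
@       IN      SOA     ns.pan.com. root.pan.com. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative cache TTL
@       IN      NS      ns.pan.com.
ns      IN      A       127.0.0.1
pi      IN      A       127.0.0.1       ; address of the device
coap    IN      AAAA    beef::1         ; IPv6 address of the CoAP server
```

With the AAAA record in place, the default A-record query for coap.pan.com correctly finds nothing, while dig AAAA returns beef::1.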
Re-installing radvd broke address configuration on the node RPi.
Restarting radvd on the gateway assigns an address to the node.
- they can both ping each other after this.
- need to set up the DNS address advertisement section.
Setting the DNS server on the node to the IPv6 address of the gateway and using dig fails, but using the IPv4 address (assigned by the attached router) works for finding the address of the CoAP server via dig. Need to work on getting the IPv6 name server address working.
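The DNS advertisement section mentioned above would be radvd's RDNSS option, along the lines of the sketch below (interface name and prefix are illustrative; the RDNSS address is the gateway's):

```
# /etc/radvd.conf -- sketch of advertising the gateway as the DNS server
interface eth0
{
    AdvSendAdvert on;
    prefix beef::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
    RDNSS beef::1
    {
        # address the node should use as its IPv6 name server
    };
};
```

If the node's resolver honours RDNSS, this should remove the need to hand-configure the IPv6 name server address on the node.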
The gateway RPi is still losing its static IP on boot (I have to take the interface down with ifdown and back up with ifup before the address takes effect); this could be to do with boot order.
I continued fixing/improving RouteFlow. I moved my multitable fastpath into the multitable RouteMod rather than having a separate option for it. With Brad's help we got this working on the Pica8; it turns out we needed to turn on combined mode, otherwise VLAN tagging can be overzealous. I found and corrected a bug in the inter-switch link (ISL) implementation for the multitable case. I also found an issue with ARP neighbour entries being added before the interface is properly up (i.e. has received a mapping packet), which results in the flow not being installed. This requires some retry system, which RouteFlow currently does not have.
I also looked at OFLOPS-turbo this week. It appears some fixes/improvements have been made over the original OFLOPS, in addition to support for 10Gbit NetFPGAs. However, there are still aspects that need work, such as supporting newer versions of OpenFlow and reducing CPU usage (some places in the code spin on gettimeofday() rather than sleeping). It appears I might have to make some large changes to add the functionality I want.
The IS0 thesis chapter was updated with the new validation results and checked.
The load balancer prevalence thesis chapter was updated to rely on validation from the per-destination load balancer analysis with limited flow IDs. Basically, this analysis relies on confidently finding diamond divergence points without discovering all successors or nested load balancers. This vastly reduces the amount of traffic required while still providing some key information.
A literature search for more relevant papers was carried out. Discussion and references were added to the related work thesis chapter as appropriate.