22 Jan 2015

Implemented and tested the final features of the event database: updating groups and events as new detections are added to an event. This meant updating the DS score, magnitude, timestamp of the most recent detection, detection count, and the set of detectors that fired for each event. The group that the recently updated event belongs to is also updated: the end timestamp of the group and the severity (the highest probability across all events is used). Spent a lot of time testing since I kept second-guessing the correct addition and updating of events. Shane also plans on deploying the eventing code so that it'll be possible to see event groups etc. on the graphs, which will be useful for debugging too.
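The update logic described above can be sketched roughly like this (a minimal Python sketch; the field names and dict-based storage are hypothetical illustrations, not the actual event database schema):

```python
def update_event(event, detection):
    """Fold a new detection into an existing event (illustrative only)."""
    event["detection_count"] += 1
    event["last_detection_ts"] = max(event["last_detection_ts"], detection["ts"])
    event["detectors"].add(detection["detector"])
    # the DS score and magnitude would also be recombined here

def update_group(group, events):
    """Refresh the group's end timestamp and severity from its events."""
    group["end_ts"] = max(e["last_detection_ts"] for e in events)
    # severity is the highest probability across all events in the group
    group["severity"] = max(e["probability"] for e in events)

event = {"detection_count": 1, "last_detection_ts": 100,
         "detectors": {"changepoint"}, "probability": 0.4}
update_event(event, {"ts": 160, "detector": "mode"})

group = {"end_ts": 100, "severity": 0.4}
update_group(group, [event, {"last_detection_ts": 120, "probability": 0.9}])
```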

Shane needed some accuracy values for DS, with and without magnitude as an additional input source, for use in his presentation. Started rerunning the sample dataset through eventing with magnitude enabled and disabled.

20 Jan 2015

Further work has been done on the chapter on load balancer prevalence. Statistics on the percentage of load balancer nodes out of all nodes were calculated, and analysis of variance was carried out on the data across all vantage points. Categories included counts of all load balancers, counts of diamond divergence points, per destination data, and per destination data excluding per flow cases. Another category was per flow or per destination load balancer nodes, including nodes internal to load balancers. Excluding per flow cases from the per destination counts gave only a small reduction, indicating that these sets are already relatively separate, even though per destination load balancing is strictly a special case of per flow. Limiting cases to diamond divergence points gave a large reduction.
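For reference, the one-way analysis of variance used for comparisons like this reduces to an F statistic: between-group variance over within-group variance. A self-contained sketch (the sample numbers are made up, not real vantage-point data):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares (n_total - k degrees of freedom)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# e.g. load balancer percentages from two hypothetical vantage points
f = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
```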

20 Jan 2015

Because DPDK timestamps are calculated for a batch of packets based upon the link speed, it is quite important to ensure that the link speed is correct and updated if it changes. As such, I registered a callback for link status changes with DPDK. This works well; however, the ixgbe DPDK driver has a large (1-3 sec) delay before sending these notifications to allow the status to settle (which seems excessive). I also found that ixgbe would not renegotiate connection speed (if it was changed) unless I stopped and started the interface again in DPDK, but this is not a safe operation while reading from the format, so I did not include it. Ideally this should be fixed in the DPDK driver.
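To illustrate why the link speed matters: per-packet timestamps within a batch can be interpolated from the wire time each frame occupies, which scales inversely with link speed. A sketch of that interpolation (illustrative only, not the actual DPDK format code):

```python
def interpolate_timestamps(batch_start, frame_lens, link_bps, overhead=20):
    """Spread timestamps across a batch using per-frame wire time.

    overhead: preamble (8 bytes) + inter-frame gap (12 bytes) on Ethernet.
    """
    timestamps = []
    t = batch_start
    for length in frame_lens:
        timestamps.append(t)
        t += (length + overhead) * 8 / link_bps  # seconds on the wire
    return timestamps

# at 10 Gbit/s a 64-byte frame occupies (64 + 20) * 8 / 10e9 = 67.2 ns
ts = interpolate_timestamps(0.0, [64, 64, 64], 10e9)
```

If the stored link speed is stale (say 1 Gbit/s after a renegotiation to 10 Gbit/s), every inter-packet gap in the batch is off by an order of magnitude, which is why the status callback matters.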

I've updated the int and ring formats to work again with the new batch read interface. I also fixed a handful of other minor bugs present in the parallel code.

I fixed a rare bug in parallel traceanon caused by unsafe data accesses, and started refactoring the parallel code, adding proper error handling and documenting the code.

19 Jan 2015

Looked at how to auto configure the IP address of the 6lbr router device on the Ethernet side of the network. Discovered that “smart bridge” mode for 6lbr auto configures an IP address based on the prefix obtained from the "home" router and the MAC address. I now have 6lbr set up as a smart bridge, and from the 6lbr website and logging output I can see that the IP addresses for the 6lbr device and WSN devices are auto configured.
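The auto-configuration described above is standard SLAAC: an interface identifier is derived from the MAC address via modified EUI-64 and appended to the prefix learned from the "home" router. A quick Python sketch of the derivation for a 48-bit MAC (the MAC and prefix below are made up; 802.15.4 nodes typically start from a 64-bit EUI instead):

```python
import ipaddress

def slaac_address(prefix, mac):
    """Build an IPv6 address from a /64 prefix and a 48-bit MAC (modified EUI-64)."""
    b = bytes(int(octet, 16) for octet in mac.split(":"))
    # flip the universal/local bit of the first octet and insert ff:fe in the middle
    iid = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(iid, "big"))

# e.g. a node with a made-up MAC under a documentation prefix
addr = slaac_address("2001:db8::/64", "00:12:4b:00:01:02")
```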

From my Linux machine I can ping to the devices on the WSN network and view the website on the 6lbr webserver. By using the USB dongle and a simple python script to pipe packets through to Wireshark I can view packets being transmitted on the WSN.
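Piping packets into Wireshark generally means writing a pcap stream to a pipe and running something like `wireshark -k -i -`. A minimal sketch of the framing involved (the link type choice and helper names are assumptions, not the actual script):

```python
import struct

LINKTYPE_IEEE802_15_4_WITHFCS = 195  # 802.15.4 frames as captured off the WSN

def pcap_global_header(linktype=LINKTYPE_IEEE802_15_4_WITHFCS, snaplen=65535):
    # magic, version 2.4, thiszone, sigfigs, snaplen, network (24 bytes total)
    return struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, snaplen, linktype)

def pcap_record(packet, ts_sec, ts_usec):
    # per-packet header: timestamp, captured length, original length
    return struct.pack("<IIII", ts_sec, ts_usec, len(packet), len(packet)) + packet

# a stream is just the global header followed by records:
# sys.stdout.buffer.write(pcap_global_header() + pcap_record(frame, 0, 0))
rec = pcap_record(b"\x41\x88\x00", 1, 2)
```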

Next step is to look into the ContikiMAC driver and how to configure the wake-up and sleep times, taking router timeouts into consideration.

19 Jan 2015

Wrote some slides on our latency event detection work for presentation at NZNOG. Had to shrink my original presentation a bit after realising I was sharing our timeslot with 2 other talks, so hopefully we'll all fit.

Experimented with using the Kolmogorov-Smirnov test as a detector for netevmon. I'm currently comparing the distributions of the latencies observed in the last 30 minutes with those observed 30 minutes prior to that. Initial results are somewhat promising, although my current method for evaluating distance between two distributions does not account for the difference between the values -- it just adds or subtracts a fixed amount to the distance depending on which value is larger. This means that a change from 40 to 42ms is just as likely to trigger an event as a change from 40 to 340ms.

19 Jan 2015

This week I have been doing a bit of tidying up.
I have spent some time tweaking the language in my literature review and fixing some of the references that got mangled.
I have also been adding more information to my presentation that I will be making to the Machine Learning group this Tuesday.

I have also fixed the build script for my flow crunching program, which extracts attributes from the flows contained in a libtrace compatible file. This is intended for offline use.

16 Jan 2015

I spent this week working with the new DAG 10X2-S cards. I spent a bunch of time reading through all the documentation and then testing various input buffer sizes at line rate to see the capture percentage. I have a feeling that dagsnap (the Endace tool provided for quickly capturing packets) isn't designed to be efficient, as I required a 2GB input buffer in order to capture 100% at 10G line rate with 64 byte packets.

I then got back onto libtrace with the DAG format. The latest DAG API is missing DUCK support (from what I could tell), so I temporarily disabled that. It doesn't get called in my code, so I'll have to investigate whether we need to keep it in. On the other hand, I was able to use a 128MB input buffer to capture 100% of line rate 64 byte packets.

Next week I'll start playing with the enhanced packet processing to try and balance the input stream across multiple CPUs. This will let me test libtrace parallel with 10G input rate. Richard was telling me he isn't able to get 100% capture using 8 cores, so it will be interesting to see how well DAG performs.

I also wrote a little patch for the DAG library to get it to run on new kernels. This was a simple matter of replacing a deprecated macro, and the change supports kernel 2.6 upwards. Brad submitted the patch to Endace in case they're interested.

13 Jan 2015

I started on the load balancer prevalence chapter. This involved adjusting my measures of per destination load balancing to include per flow load balancing, as is usual. I think that I will still also report the per destination load balancers without other per flow load balancers counted. Another statistic I am currently generating is a count of per flow diamond divergence point load balancers. This will be reported as a percentage of uniquely identified nodes found, and will relate well to my per destination load balancer counts, as nested load balancers are not found in this case either.

Another run to find black holes in load balancers was started.

12 Jan 2015

Now working at Virscient on the project. This week I worked alongside Dean, setting up the test environment for programming the devices and creating a simple test network.

We have set up a Linux computer, which is connected to an IPv6 router via Ethernet. The IPv6 router is connected to a server (called "Convincing John") to allow safe connectivity to the internet. On the Linux computer is a python script which creates a tap interface so that we can read and write packets between the WSN and Ethernet in both directions by bridging the Ethernet and tap interfaces. On the WSN side we have a cc2538 device running 6LBR as a router, which has been modified to output packet information, both printed for debugging purposes and written to the Ethernet interface by the python script. The python script reads packets coming in over serial from the WSN and writes them to Ethernet (the IPv6 router), and in the reverse direction reads packets coming in from the Ethernet side and writes them over serial to the cc2538 device.

09 Jan 2015

Over the break my patch made it into the kernel, which is nice. I was going to characterise the performance of TPv2 vs TPv3, but ostinato wasn't up to the task (rate control was either line rate, 6 Mpps, or anything specified under 1 Mpps). So I spent some time changing the way packets are sent from ostinato. Richard pointed out that Intel cards can do hardware rate limiting (1 Mbit resolution), and this allowed me to add very fine-grained control of packet rates to ostinato. The format ostinato uses to move packets around was a little convoluted, but I managed to calculate how many packets to send, giving us the ability to send a specified number of packets at a specified rate. The accuracy will be determined when we get the 10 Gbit DAG cards.
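The packet-count arithmetic is straightforward once the hardware rate limit is expressed in bits: on Ethernet each frame also occupies 20 bytes of preamble and inter-frame gap on the wire. A sketch of the conversion (the function names are hypothetical, not ostinato's API):

```python
def line_rate_pps(link_bps, frame_len, overhead=20):
    """Packets per second at a given bit rate (preamble + IFG = 20 bytes/frame)."""
    return link_bps / ((frame_len + overhead) * 8)

def packets_to_send(rate_pps, duration_s):
    """How many packets to emit for a fixed-duration run at a fixed rate."""
    return int(rate_pps * duration_s)

# 10 Gbit/s with 64-byte frames gives the classic 14.88 Mpps line rate
pps = line_rate_pps(10e9, 64)
```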

Once that was sorted I started looking for other things to be done to libtrace. I started looking into the Rijndael implementation to try to fix the compiler warnings. Richard came up with the idea of using unions to negate the need to cast between uint8 and uint32. I like this idea because it may also be possible to upgrade the library to use uint64 instead (which would halve the number of operations needed), assuming the calculations can be parallelised in this way.
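The union approach works because byte-wise operations such as the XORs in Rijndael's AddRoundKey step apply equally well to whole words at once. A Python sketch of that equivalence (the C code would use a union of `uint8_t[4]` and `uint32_t` rather than `struct` packing):

```python
import struct

def xor_bytewise(a, b):
    """XOR two equal-length byte strings one byte at a time (uint8 view)."""
    return bytes(x ^ y for x, y in zip(a, b))

def xor_wordwise(a, b):
    """XOR the same data four bytes at a time (uint32 view)."""
    out = bytearray()
    for i in range(0, len(a), 4):
        wa, = struct.unpack_from("<I", a, i)
        wb, = struct.unpack_from("<I", b, i)
        out += struct.pack("<I", wa ^ wb)
    return bytes(out)

state = bytes(range(16))       # a 16-byte AES state
key = bytes(range(100, 116))   # a 16-byte round key
```

Since XOR has no carries between bit positions, the word-wise result is bit-identical to the byte-wise one; the same reasoning is what would justify widening to uint64.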

Also on the to do list is pulling changes from the vanilla libtrace into the parallel libtrace for the DAG cards, and testing libtrace at 10 gig when the new DAG cards arrive.