23 Jan 2018

This week I looked into why the timing results that I collected vary by a large amount between sequential executions of the experiment. After analysing packet traces, I found that TCPDUMP wasn't showing all packets on the primary logger (the logger on the primary path, directly connected to the failed link). When a link is taken down, TCPDUMP crashes silently. I was collecting packet information by piping TCPDUMP's output to a file, and TCPDUMP buffers that output. The buffer is not fully flushed when the application exits because of the link going down, so fewer packets appear in the output, making the results vary by a large margin depending on the state of the buffer and how many packets were captured. We can fix this problem by running TCPDUMP with the --immediate-mode flag, which disables the output buffering.
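
For reference, each logger is essentially doing the equivalent of the following (a minimal sketch; the interface name and output file are placeholders rather than the real experiment configuration):

    import subprocess

    def start_logger(interface, outfile):
        # --immediate-mode makes tcpdump hand over packets as soon as they
        # arrive instead of buffering them, and -l line-buffers the text
        # output, so the capture file is complete even if tcpdump exits
        # abruptly when the monitored link is taken down.
        out = open(outfile, "w")
        return subprocess.Popen(
            ["tcpdump", "-i", interface, "-n", "-l", "--immediate-mode"],
            stdout=out, stderr=subprocess.DEVNULL)

    # Placeholder interface and file names, for illustration only.
    primary = start_logger("s1-eth1", "primary_logger.txt")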

Next, I looked at mitigating the effect of the loggers' location in the network on the recovery time measurement. Because I was using packet timestamps, the location of a logger could skew the recovery time calculation depending on the length of the recovery or detour path. I fixed this by re-implementing the loggers in libtrace and writing a libtrace app that calculates recovery time: it takes two packet traces and uses the pktgen timestamps to compute the recovery time. I am also calculating packets lost by comparing the pktgen sequence numbers across the two loggers' traces (the last packet seen on the primary logger and the first packet seen on the secondary logger). Using the pktgen fields also allows us to place the two loggers on separate switches; previously they had to be on the same switch to avoid clock differences between the virtual switches.
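
The core of the calculation app boils down to comparing the last pktgen packet seen on the primary logger with the first one seen on the secondary logger. A rough sketch of that logic, assuming the traces have already been parsed (with libtrace in the real app) into (sequence number, timestamp) pairs:

    # Sketch of the recovery time / packet loss calculation, assuming each
    # trace has already been parsed into a list of (seq, timestamp) tuples
    # ordered by capture time. Parsing is done with libtrace in the real app.

    def recovery_stats(primary, secondary):
        last_seq, last_ts = primary[-1]     # last packet before the failure
        first_seq, first_ts = secondary[0]  # first packet on the detour path

        recovery_time = first_ts - last_ts
        # pktgen sequence numbers are consecutive, so the gap between these
        # two packets tells us how many packets were lost during recovery.
        packets_lost = first_seq - last_seq - 1
        return recovery_time, packets_lost

    # Example: primary logger stopped at seq 1042, secondary started at 1057.
    rt, lost = recovery_stats([(1042, 10.0)], [(1057, 10.5)])
    print(rt, lost)  # 0.5 seconds, 14 packets lost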

After these modifications, I re-collected the recovery time stats on my lab machine. I am currently finishing off cleaning and re-graphing these stats.

16 Jan 2018

This week I finished implementing all parts of the optimisation mechanism of the proactive controller, which optimises installed recovery paths. While testing I found and resolved several issues and bugs. I then refactored the controllers to clean them up and remove duplicated code where appropriate, and made sure the controllers have adequate comments.

I then modified the recovery time simulation framework to allow specifying the wait state for each individual topology. My recovery time framework uses a wait state file which defines the state (flow and group rule entries) that the network switches have to be in before the failure experiment starts. Before this modification the wait state was defined per controller. Differences between topologies could make the wait state mechanism inaccurate, and it would only have worked across topologies that are similar, which restricted our simulation testing.
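
To illustrate the change, here is a hypothetical, simplified stand-in for the real wait state files (not their actual syntax): the expected switch state is now keyed per topology rather than defined once per controller.

    # Hypothetical, simplified illustration of a per-topology wait state:
    # for each topology, the number of flow and group entries each switch
    # must have installed before the failure experiment is started.
    # The real wait state files use their own format; this is just the idea.
    WAIT_STATE = {
        "ring4": {
            "s1": {"flows": 6, "groups": 2},
            "s2": {"flows": 4, "groups": 2},
        },
        "fat_tree": {
            "s1": {"flows": 10, "groups": 4},
            "s2": {"flows": 10, "groups": 4},
        },
    }

    def ready(topology, observed):
        # observed maps switch name -> {"flows": n, "groups": m} as polled
        # from the switches; the experiment starts only once every switch
        # matches the expected state for this topology.
        expected = WAIT_STATE[topology]
        return all(observed.get(sw) == state for sw, state in expected.items())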

At the end of the week I cleaned up, processed and graphed the stats I have collected, which look at the recovery time of the 3 controllers on 3 different topologies under 5 failure scenarios.

08 Jan 2018

This week was a shorter week. I finished implementing the modifications to the proactive controller that allow optimisation of recovery paths. The proactive controller still uses protection to recover quickly from failures at the switch level; however, the controller is now made aware of links that have failed. The failed-link information is then used to update the controller's topology information and re-compute optimal paths in the network.
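
A minimal sketch of that re-computation step, assuming a networkx graph as the topology view (the real controller's data structures and handlers differ):

    # Sketch: react to a link-failure notification by updating the topology
    # view and re-computing optimal paths. networkx is assumed here purely
    # for illustration; the actual controller keeps its own topology state.
    import networkx as nx

    class TopologyView:
        def __init__(self, links):
            self.graph = nx.Graph()
            self.graph.add_edges_from(links)

        def link_failed(self, a, b):
            # Protection has already redirected traffic at the switch level;
            # here we just bring the controller's view up to date so optimal
            # paths can be re-computed on the reduced topology.
            if self.graph.has_edge(a, b):
                self.graph.remove_edge(a, b)

        def optimal_path(self, src, dst):
            return nx.shortest_path(self.graph, src, dst)

    topo = TopologyView([("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")])
    topo.link_failed("s2", "s3")
    print(topo.optimal_path("s1", "s3"))  # ['s1', 's4', 's3']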

18 Dec 2017

Continued trying to get better performance out of the ndag protocol. Most of my time this week was actually spent trying to resolve some issues with the DPDK packet counters, which were not proving to be particularly accurate or useful. After a lot of debugging, I realised that my DPDK packet generator software was reading and clearing the stats for all of the DPDK interfaces, not just the one it was transmitting on. Once I made the packet generator blacklist the interface I was using to receive reflected ndag packets, my numbers started making sense again.

Eventually I found that the best way to improve performance was to simply enable jumbo frames and therefore send fewer multicast packets. With jumbo frames, I've been able to capture, encapsulate, export and subsequently receive a sustained 10G with no real issues.

On the OpenLI side, I've improved the API around my decoding code so that I can easily perform the main tasks that libtrace and libpacketdump will require. Specifically, I now have API functions for getting the timestamp, record length and start of the IP capture field from an ETSI-encoded packet. I've also added a "get next field as a string" function for generating nice tracepktdump output. Hopefully with all of this in place, I should be able to integrate the decoder into libtrace by the end of the year.

15 Dec 2017

At the start of this week, I finished modifying the proactive protection recovery controller to retrieve and use the network topology dynamically when computing paths through the network. I then worked on improving the path splicing algorithm by allowing it to consider more nodes from the primary and secondary paths as potential path splice source and destination nodes. I finished implementing a new version of the path splice computation algorithm, which seems to produce splice paths that overlap less with the existing paths and join closer to the destination node.
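
Conceptually, the new splice computation works along these lines; the sketch below is only shorthand for the idea (networkx assumed, scoring illustrative), not the controller's actual algorithm:

    # Conceptual sketch only: consider every node on the primary path as a
    # splice source and every node on the secondary path as a splice
    # destination, compute a connecting path for each pair, and prefer
    # splices that reuse few existing edges and join close to the end of
    # the secondary path (i.e. close to the flow's destination).
    import networkx as nx

    def edges_of(path):
        return {frozenset(e) for e in zip(path, path[1:])}

    def best_splice(graph, primary, secondary):
        used = edges_of(primary) | edges_of(secondary)
        best, best_score = None, None
        for src in primary[:-1]:
            for dst in secondary[1:]:
                if src == dst:
                    continue
                try:
                    splice = nx.shortest_path(graph, src, dst)
                except nx.NetworkXNoPath:
                    continue
                overlap = len(edges_of(splice) & used)
                # Lower overlap first, then a join point nearer the destination.
                score = (overlap, len(secondary) - secondary.index(dst))
                if best_score is None or score < best_score:
                    best, best_score = splice, score
        return best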

I also added a few more failure scenarios and network topologies to my testing framework. I then modified the failure scenario parser/loader and files of my simulation framework to allow specifying different Tcpdump logger locations for different network topologies. The loggers are used to calculate recovery time under certain failure scenarios; this modification was needed because different controllers will produce different recovery paths. I also made several other fixes and changes to address bugs and problems found along the way.

At the end of the week, I started working on extending the proactive controller to receive link failure notifications and optimise the pre-installed recovery paths based on the new network topology.

15 Dec 2017

Finished updating the RouteEntry code to save and load to/from a raw buffer and put it into testing. The time to transmit a million routes between processes dropped from 20+ seconds to less than a second. Memory usage also shrank massively, such that all million routes could be sent at once, which was not possible when using pickle.
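
A simplified sketch of the raw-buffer approach (the field set and fixed binary layout are invented for illustration; the real RouteEntry stores more attributes):

    # Illustrative sketch of saving/loading route entries to a raw buffer
    # instead of pickling them. The fields and fixed layout are made up for
    # the example; the real RouteEntry carries more state.
    import struct

    # prefix (4 bytes), prefix length (1 byte), next hop (4 bytes), metric (4 bytes)
    ROUTE_FORMAT = struct.Struct("!IBII")

    def routes_to_bytes(routes):
        buf = bytearray()
        for prefix, length, nexthop, metric in routes:
            buf += ROUTE_FORMAT.pack(prefix, length, nexthop, metric)
        return bytes(buf)

    def routes_from_bytes(data):
        return [ROUTE_FORMAT.unpack_from(data, off)
                for off in range(0, len(data), ROUTE_FORMAT.size)]

    # Example: a million illustrative routes pack into roughly 13 MB.
    routes = [(0x0A000000 + i, 24, 0xC0A80001, 100) for i in range(1_000_000)]
    blob = routes_to_bytes(routes)
    assert routes_from_bytes(blob) == routes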

Made some other improvements to memory usage by no longer storing copies of the routes where not required (and it's easier to ask a peer to resend them), and by storing routes in simple lists when a more complicated data structure isn't actually required.

The BGP router still uses more memory than I would like, and takes longer to do things than I would like, but it is now much improved. Half of the time taken is now spent waiting on ExaBGP to send me all routes, and there are still plenty of inefficiencies to fix in the way filtering and fixing of routes happens.

15 Dec 2017

Continued to investigate the performance of the BGP router and discovered that a very significant amount of time is spent pickling and unpickling routes to send between processes. Using a newer version of Python allows me to modify how messages sent through multiprocessing queues get serialised, so I experimented with protocol buffers, JSON, and a few other approaches to see what might work. Everything I tested was still too slow or too memory intensive when dealing with a million route entries. I decided that the best approach was to store all the routes in one "bytes" field of a protocol buffer message (rather than having each route as a distinct part of the message) and to write just the relevant parts of each route entry straight into the buffer. Started work on implementing this.
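
A toy comparison along these lines (with an invented route layout) shows why packing everything into a single bytes blob, which then becomes the one "bytes" field of the protobuf message, is attractive compared with serialising each route separately:

    # Toy comparison, with an invented route layout, of pickling routes
    # individually versus packing them all into one bytes blob.
    import pickle
    import struct
    import time

    ROUTE = struct.Struct("!IBII")  # prefix, prefix length, next hop, metric
    routes = [(0x0A000000 + i, 24, 0xC0A80001, 100) for i in range(1_000_000)]

    start = time.monotonic()
    pickled = [pickle.dumps(r) for r in routes]
    print("pickle each:", time.monotonic() - start, sum(map(len, pickled)), "bytes")

    start = time.monotonic()
    packed = b"".join(ROUTE.pack(*r) for r in routes)
    print("pack blob:  ", time.monotonic() - start, len(packed), "bytes")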

15 Dec 2017

Found and fixed a couple of small bugs in the pickling implementation of the RouteEntry class used in the BGP router. Updated the unit tests to check that prefixes and route entries could be correctly pickled and unpickled. Also found and fixed various small issues that didn't show up in testing, but did when exposed to real BGP implementations and a more diverse set of routes (more tests required!).
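
The round-trip tests follow roughly this pattern (the RouteEntry constructor shown here is hypothetical; the real class takes different arguments):

    # Sketch of a pickle round-trip unit test; the RouteEntry shown here is
    # a hypothetical stand-in for the real class.
    import pickle
    import unittest

    class RouteEntry:
        def __init__(self, prefix, next_hop, as_path):
            self.prefix, self.next_hop, self.as_path = prefix, next_hop, as_path

        def __eq__(self, other):
            return (self.prefix, self.next_hop, self.as_path) == \
                   (other.prefix, other.next_hop, other.as_path)

    class PickleRoundTripTest(unittest.TestCase):
        def test_route_entry_round_trip(self):
            entry = RouteEntry("10.0.0.0/24", "192.168.0.1", [65001, 65002])
            copy = pickle.loads(pickle.dumps(entry))
            self.assertEqual(entry, copy)

    if __name__ == "__main__":
        unittest.main()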

Started work on getting useful performance numbers around how long it takes to process and distribute routes, so I merged the testing Prometheus code I had previously written and expanded it to cover more of the interesting parts of the code. Every time routes are touched (importing, exporting, filtering, etc.) the time taken is recorded and available to query. So far it looks like most of the time is spent outside of my main functions, in other places: moving data around between processes.
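
The instrumentation is along these lines, assuming the prometheus_client library (metric and function names here are illustrative, not the router's real ones):

    # Sketch of timing route operations with Prometheus metrics, assuming
    # prometheus_client; the metrics and functions are illustrative only.
    from prometheus_client import Summary, start_http_server

    IMPORT_TIME = Summary("bgp_route_import_seconds",
                          "Time spent importing routes from a peer")
    FILTER_TIME = Summary("bgp_route_filter_seconds",
                          "Time spent filtering routes")

    @IMPORT_TIME.time()
    def import_routes(routes):
        return [r for r in routes if r is not None]

    @FILTER_TIME.time()
    def filter_routes(routes, predicate):
        return [r for r in routes if predicate(r)]

    if __name__ == "__main__":
        # Expose the recorded timings for Prometheus to scrape and query;
        # a real service would keep running after this point.
        start_http_server(8000)
        import_routes([1, 2, None, 3])
        filter_routes([1, 2, 3, 4], lambda r: r % 2 == 0)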

15 Dec 2017

Had another look into building an AMP test using headless Chrome to measure web (particularly YouTube) performance. I can get my code building within the Chrome build system, but I really want to create a library that I can link my own code against, and nothing like that gets built. The build system claims to produce such libraries, but they are missing most of the symbols I need, so I still need to look into this further.

Found and fixed an issue where amplet2-client certificate fetching failed after a certain number of certificates had been issued. It turned out to be a simple type issue: comparisons were being made using the wrong type, which caused incorrect sorting and returned the wrong certificate.
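
The class of bug, illustrated generically here rather than with the actual amplet2 code, is easy to hit when serial numbers end up compared as strings:

    # Generic illustration of the bug class (not the actual amplet2 code):
    # comparing numeric values as strings sorts them lexicographically, so
    # "picking the latest" returns the wrong item once the numbers grow.
    serials = [8, 9, 10, 11]

    as_strings = sorted(str(s) for s in serials)
    as_numbers = sorted(serials)

    print(as_strings[-1])  # '9'  -- lexicographic: '10' < '11' < '8' < '9'
    print(as_numbers[-1])  # 11  -- the certificate we actually wanted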

Spent some time writing installation documentation for the AMP server components and adding it to the GitHub wiki.

15 Dec 2017

Built and released new Debian and Ubuntu packages for amplet2-client, ampy, and ampweb.

Found and fixed a few issues in the netevmon email filtering that were caused by the incorrect types being used to make comparisons. Built and installed new packages in one deployment for testing.

Started work tidying up the C modules recently created for some of the more memory-hungry parts of the BGP router. I was able to simplify them in a few places, reorganising code to replace custom code with existing library functions, and to shrink the amount of memory required slightly further again.