Richard Sanger's blog

31 May 2016

Continuing my work parsing Table Type Patterns and fitting rules into the pipelines they describe. In the last week I've refactored the code into more of a library, so my script which prints tables now uses much of the same code as the rule fitting.

I've also looked at parsing some of the sample Table Type Patterns provided by the working group. These describe matches with more complexity than the ofdpa TTP; most notably they use meta_members within matches and flows. Meta_members add constraints such as "one_or_more" and "exactly_one", and these can be nested (by default "all" is used). As expected there are a couple of special cases I've had to handle to make these TTPs work: things which are human interpretable but not easily machine readable, or cases where fields are used in an inconsistent manner, partly due to vagueness in some parts of the spec.
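To give a feel for these constraints, here is a minimal sketch of how nested meta_members might be evaluated against the set of fields a flow actually matches. The JSON layout ("meta_member", "members", "field") is simplified from the spec, and the helper is my own illustration, not the library code:

```python
# Sketch: evaluate nested TTP meta_member constraints against the set of
# fields a flow matches. Keys are simplified from the TTP spec.

def matches_constraint(entry, flow_fields):
    """Return True if flow_fields satisfies this match_set entry."""
    if 'field' in entry:
        return entry['field'] in flow_fields
    results = [matches_constraint(m, flow_fields)
               for m in entry.get('members', [])]
    meta = entry.get('meta_member', 'all')  # "all" is the default
    if meta == 'all':
        return all(results)
    if meta == 'exactly_one':
        return sum(results) == 1
    if meta == 'one_or_more':
        return any(results)
    raise ValueError('unknown meta_member: %s' % meta)

# Example: ETH_TYPE plus exactly one of IPV4_SRC/IPV6_SRC
match_set = {'meta_member': 'all', 'members': [
    {'field': 'ETH_TYPE'},
    {'meta_member': 'exactly_one', 'members': [
        {'field': 'IPV4_SRC'}, {'field': 'IPV6_SRC'}]},
]}
print(matches_constraint(match_set, {'ETH_TYPE', 'IPV4_SRC'}))  # True
```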

25 May 2016

Been working through parsing a Table Type Pattern. These are just JSON objects, making them easy to load, however some work is involved in extracting meaning. So far I've been looking at Broadcom's ofdpa TTP, which represents a real-world pipeline. The ofdpa TTP has a few typos which I have worked around.

I've been using python and ryu to parse the pipeline and to start detecting where flows/rules can be added to it. The ryu app grabs all the rules from a switch and attempts to fit these into the TTP pipeline. This is currently a very simple process where a flow/rule can be placed in a table if its match set fits within the TTP's allowed matches. Order dependencies, as well as the instructions associated with the flow/rule, are being ignored for now.
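Stripped of the TTP plumbing, the fitting check is essentially a subset test. The table names and field sets below are loosely modelled on ofdpa rather than taken from it:

```python
# Sketch of the simple fitting check: a flow can sit in a table if every
# field it matches is one the TTP allows there. Instructions and table
# ordering are deliberately ignored at this stage.

def tables_for_flow(flow_match_fields, ttp_tables):
    """ttp_tables maps table name -> set of allowed match fields."""
    return [name for name, allowed in ttp_tables.items()
            if set(flow_match_fields) <= allowed]

ofdpa_like = {
    'vlan': {'IN_PORT', 'VLAN_VID'},
    'bridging': {'VLAN_VID', 'ETH_DST'},
    'acl': {'IN_PORT', 'ETH_TYPE', 'ETH_SRC', 'ETH_DST', 'VLAN_VID'},
}
print(tables_for_flow({'ETH_DST', 'VLAN_VID'}, ofdpa_like))
# -> ['bridging', 'acl']
```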

Going forward the plan is to look at loading some of the sample TTP pipelines, and to extend the processing to consider the dependencies involved in reaching a table when installing a rule.

18 May 2016

Had a couple of days off sick this week, however this report also covers the week before.

Continued with work getting sample flows, this time from vandervecken. I set up the vandervecken ISO in a KVM machine with a small mininet network. As with the other apps I'm trying to keep the environment as contained as possible so it can easily be run on any machine. I also ran vandervecken on the new DPDK OVS software switch with some simple iperf3 benchmarks comparing the fast-path to the slow-path; this gives me some solid results for the paper showing the improvement, and proof of reaching line rate.

I've given more thought to the problem of creating dependency graphs. While cacheflow gives a starting point, it is still not entirely clear to me what form a multi-table implementation will take. The complex dependencies between tables, especially with apply vs write actions, are still not clear. This comes down to defining what a dependency is, which depends somewhat on what restrictions later transformations will have. A dependency between tables could simply be any rule which sends traffic to a rule in the next table, plus that rule's dependencies. However it is possible that for some transformations this is not the best definition; for instance if rules are writing an action to the action set it may be possible to move these to an earlier table, in which case it is not obvious a dependency exists. The compression to a single table as in flowadapter is still a viable approach to overcoming this part of the problem.
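One possible concrete reading of that simplest definition, assuming each rule carries at most a goto-table instruction, might look like this; the rule representation and the naive overlap test are placeholders for the real thing:

```python
# Sketch: rule A depends on rule B if A sends traffic to B's table and
# their matches can overlap. Matches are exact-value dicts here.

def may_overlap(match_a, match_b):
    """Two exact-match dicts can see the same packet unless they
    disagree on a field they both match."""
    return all(match_a[f] == match_b[f]
               for f in set(match_a) & set(match_b))

def table_dependencies(rules):
    deps = []
    for a in rules:
        if a['goto'] is None:
            continue
        for b in rules:
            if b['table'] == a['goto'] and may_overlap(a['match'], b['match']):
                deps.append((a['id'], b['id']))
    return deps

rules = [
    {'id': 'A', 'table': 0, 'match': {'VLAN_VID': 10}, 'goto': 1},
    {'id': 'B', 'table': 1, 'match': {'ETH_DST': '00:00:00:00:00:01'},
     'goto': None},
]
print(table_dependencies(rules))  # [('A', 'B')]
```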

My next step is to take rules, independent of tables and dependencies, and detect which tables these could be placed in in a new pipeline. This involves reading in a hardware description, for which TTP still gives a good starting point despite very few vendors having released TTPs. In fact I'm only aware of a TTP for ofdpa and a couple of samples released with the spec.

03 May 2016

Spent the better part of this week reviewing literature and thinking about the best starting point and the first issue to tackle.

CacheFlow gives a good outline of building dependency graphs, and the header space work it builds its solution upon seems like a good approach; that is, to look at packet headers as a series of bits rather than a set of fields. If I take this approach I will have to extend the solution to deal with multiple tables. Alternatively the FlowAdapter approach of normalising to a single table is still a possibility (some type of dependency graph is part one of this step anyway). My current thinking is that a dependency graph is likely to result in better optimisations than one big table, which would essentially have to be undone when placed back onto a multi-table switch.
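The bit-level view is compact enough to show in a few lines: a match becomes a ternary string over {0, 1, x}, and two matches can apply to the same packet iff no bit position pins them to different values. This overlap test is the core of CacheFlow-style dependency building (a minimal sketch, not their implementation):

```python
# Header-space style overlap test on ternary bit strings.

def intersects(a, b):
    """True if ternary strings a and b share at least one packet."""
    assert len(a) == len(b)
    return all(p == q or p == 'x' or q == 'x' for p, q in zip(a, b))

# e.g. an 8-bit field: wildcarded low bit vs an exact value
print(intersects('0000101x', '00001010'))  # True
print(intersects('00001011', '00001010'))  # False
```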

I looked at the state of the art in minimising TCAM entries. Most work, particularly in the past, has been on prefix-based optimisation (as is seen in routes etc.). More recently OpenFlow has sparked interest in generic TCAM rule optimisation (without the prefix restriction); currently there appears to be only a single online solution. I don't think this is going to be a main area of my research, however if I have the time I could try an existing solution in the pipeline directly before installing on the switches.

I read a few related papers which focused on spreading rules amongst multiple switches. These tend to be limited to spreading only the policy, not the forwarding, and tend to construct subsets of rules in such a way that order does not matter, allowing the rules to be placed in any table along a packet's path. This restriction is not needed within the bounds of a single switch, as the order of tables is known and there is essentially only a single path. So while interesting, and useful as inspiration for algorithms, without the order restriction it is actually easy to move rules around: lower priority rules can be moved to a later table, as sketched below.
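A sketch of that observation, under the assumption that the first table falls through to the second via a match-all table-miss entry (so the first table only gives up when none of its higher-priority rules matched):

```python
# Cut a prioritised single-table rule list at a threshold, pushing the
# lower-priority remainder into a later table behind a table-miss goto.

def split_by_priority(rules, threshold):
    first = [r for r in rules if r['priority'] >= threshold]
    second = [r for r in rules if r['priority'] < threshold]
    # Table-miss entry (priority 0, match-all) sends misses onwards
    first.append({'priority': 0, 'match': {}, 'goto': 1})
    return {0: first, 1: second}

rules = [{'priority': 100, 'match': {'ETH_TYPE': 0x0806}},
         {'priority': 10, 'match': {'ETH_DST': 'aa:bb:cc:dd:ee:ff'}}]
print(split_by_priority(rules, 50))
```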

27 Apr 2016

Started working through setting up and running a handful of OpenFlow applications, starting with switches. For this I'm trying to keep everything contained within Docker containers, with scripts to remind me how to run each, as well as keeping things as portable as possible. I'm using mininet to simulate a small number of hosts on each.

I've set up the ONOS docker again, which includes a simple switch, and a simple mininet network. I've also configured Valve, a VLAN switch, running from a docker with a VLAN'd network. I wrote docker files for Faucet, another VLAN switch, and fixed a couple of bugs which have been merged back into github. Faucet is based upon Valve, however it provides an interesting case by being a multi-table application, unlike Valve and ONOS's switch.

I've spent some time manually going through the resulting flow tables from the switches tested, and it seems hard to make many improvements to the single-table rules, such as converting them to a multi-table layout similar to Faucet's. A single-table switch reactively installs rules connecting two hosts only when both try to talk to each other; if it did not, it would result in a rule for each src/dst pair, i.e. scaling with hosts^2. Whereas a multi-table switch like Faucet will maintain a learning table and a forwarding table, with each host in both, scaling as 2*hosts. As a result of the reactive single-table learning, not all src/dst pairs are installed, making the jump to a src and dst table invalid, as this would install rules for src/dst pairs that did not exist in the original.
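The back-of-envelope scaling, just to make the gap concrete (worst case where every host talks to every other):

```python
# Worst-case rule counts: reactive single table vs Faucet-style split.

def single_table_rules(n_hosts):
    return n_hosts * (n_hosts - 1)   # one rule per communicating src/dst pair

def two_table_rules(n_hosts):
    return 2 * n_hosts               # one entry per host in each table

for n in (10, 100, 1000):
    print(n, single_table_rules(n), two_table_rules(n))
# 10 -> 90 vs 20; 100 -> 9900 vs 200; 1000 -> 999000 vs 2000
```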

I'm also working through recent literature, and re-reading some existing papers in relation to the problem. I've just started compiling an updated document with possible approaches from the literature.

20 Apr 2016

Started looking at translating OpenFlow rules to fit new pipelines. The first part of this is to understand the types of rules controllers are installing, and to look manually at what changes could be made. The first step is collecting runtime traces of a number of controllers working in realistic networks, so that we can identify rules that scale with hosts vs one-time setup rules etc.

As such I've worked on my quickly hacked together passthrough OpenFlow controller, and reworked the threading to use a single processing thread with a publish/process architecture, rather than spawning a new thread per message. I've then used libtrace to record the OpenFlow conversations. I've also added simple support to try and group similar sets of rules and count their frequencies.
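The shape of that rework, sketched in Python rather than the libfluid C++ it actually lives in: connection handlers only enqueue, and a single worker drains the queue, so no read path ever blocks writing to the peer while holding its own connection lock.

```python
# Publish/process sketch: one queue, one processing thread, no
# thread-per-message.

import queue
import threading

msg_queue = queue.Queue()

def on_message(conn, data):
    """Called from a connection's read path: publish and return."""
    msg_queue.put((conn, data))

def process_loop(forward):
    """Single processing thread: pop, process, forward onwards."""
    while True:
        conn, data = msg_queue.get()
        forward(conn, data)
        msg_queue.task_done()

worker = threading.Thread(
    target=process_loop,
    args=(lambda conn, data: print('forwarding %d bytes' % len(data)),),
    daemon=True)
worker.start()
```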

Next week I will be compiling a set of test OpenFlow applications and collecting traces and flow tables.

13 Apr 2016

Arrived back in the weekend and spent some time catching up on things I had missed. I spent the majority of the week working through the suggestions from supervisors on the OpenFlow packet_in and out paper. I also spent some time considering where my PhD is going, and what the best starting place would be.

I also spent an afternoon working through a performance-related libtrace DPDK issue reported on github, which now means we use the default values provided by the device, a new option in more recent versions of DPDK. I also confirmed that DPDK 2.0 appears to work. Others still appear to be working on porting libtrace to newer versions of DPDK.

11 Mar 2016

Received feedback for the paper, I've worked a little bit on some of this.

Filled out my PhD progress report. Had a talk with Richard about this; he is concerned that my proposed approach might take too long before I start tackling the key issues. As such this is likely to be discussed more next meeting.

Styled the patch manager I've been working on with bootstrap, added some basic documentation, and did some other tidying. It now has a name, OFCupid, and is available on github: https://github.com/wandsdn/OFCupid.

Had some interest in libtrace with the DPDK 2.2 library (libtrace currently supports up to 1.8); it seems that they are working on updating the code to support this.

I'm away for the next three weeks.

07 Mar 2016

Focusing on multi-table pipelines this week. I've started by creating a passthrough OpenFlow controller allowing me to intercept and rewrite messages. For this I used the familiar libfluid library; while this works it may not be ideal, due to its locking of both the read and write of a connection at the same time. This can deadlock when both connections receive a message at the same time and each then tries to write to the other connection, which is locked due to the receive. Currently I spawn a thread per message to work around this, however this is expensive.

I've also started looking at the Pica's multi-table pipeline, which can include a MAC matching table and routing on top of the standard ACL table (where OpenFlow rules are typically installed). These tables can be placed at either a higher or lower priority than the ACL.

In order to test these tables' functionality I decided to focus on the MAC table, with a modified version of ryu's simple switch. To test this I set up mininet with two bridged networks (in the same subnet), each of which is connected to one port on the Pica. Off each bridged network any number of hosts can be created --- with all packets that traverse between these networks being switched by the Pica.
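Roughly that topology in mininet's Python API, as a sketch only: the interface names are hypothetical, Intf() attaches a physical NIC to a mininet switch, and one host per side stands in for "any number":

```python
# Two software bridges in one subnet, each patched out to one hardware
# switch port; the OpenFlow controller runs externally (e.g. ryu).

from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.link import Intf

net = Mininet(controller=RemoteController)
left, right = net.addSwitch('s1'), net.addSwitch('s2')
for i, sw in ((1, left), (2, right)):
    h = net.addHost('h%d' % i, ip='10.0.0.%d/24' % i)
    net.addLink(h, sw)
# Attach each bridge to one port of the hardware switch under test
Intf('eth1', node=left)
Intf('eth2', node=right)
net.start()
```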

Ryu's original version of simple switch assumed a single host per port, however in the mininet configuration there are multiple. This, in consideration with the matches and actions available in the MAC table, led me to a solution which matches ARP packets at a high priority and sends these to the controller, and installs eth_dst rules beneath that.
I found MAC table flows had to strictly match an eth_dst and vlan_vid as documented, however could optionally match the ethertype also. I also found a possible bug, where installing a rule with a send-to-controller action would replay the last packet-in; in the case of simple switch this resulted in attempting to install that same rule again, resulting in an infinite cycle (more investigation is needed).
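A minimal ryu-style sketch (OpenFlow 1.3) of those two rules; MAC_TABLE, the priorities and the addresses are placeholders rather than the hardware's real values:

```python
# ARP punted to the controller at high priority, with learned eth_dst
# entries beneath it. vlan_vid must carry the OFPVID_PRESENT bit.

MAC_TABLE = 0  # placeholder table id

def add_flow(dp, table_id, priority, match, actions):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=table_id,
                                  priority=priority, match=match,
                                  instructions=inst))

def install_rules(dp, vlan, dst_mac, out_port):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    # ARP to the controller, above everything else in the table
    add_flow(dp, MAC_TABLE, 100,
             parser.OFPMatch(eth_type=0x0806),
             [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                     ofp.OFPCML_NO_BUFFER)])
    # Learned destination: eth_dst and vlan_vid, as the table requires
    add_flow(dp, MAC_TABLE, 10,
             parser.OFPMatch(eth_dst=dst_mac,
                             vlan_vid=(ofp.OFPVID_PRESENT | vlan)),
             [parser.OFPActionOutput(out_port)])
```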

I've started initial thinking and work towards translating these rules from the single table exposed by simple switch into multiple tables. I created a sample ryu implementation that uses multiple tables, and have started working through processing the matches to detect cases where a translation could be made in the passthrough controller.
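The first cut of that detection reduces to a field-set test: a single-table rule can move into the MAC table only if it matches nothing beyond what that table supports. The allowed/required sets below mirror the behaviour noted above, not a vendor specification:

```python
# Can this rule's match be expressed in the MAC table?

MAC_TABLE_FIELDS = {'eth_dst', 'vlan_vid', 'eth_type'}
MAC_TABLE_REQUIRED = {'eth_dst', 'vlan_vid'}

def fits_mac_table(match_fields):
    fields = set(match_fields)
    return (MAC_TABLE_REQUIRED <= fields and
            fields <= MAC_TABLE_FIELDS)

print(fits_mac_table({'eth_dst', 'vlan_vid'}))              # True
print(fits_mac_table({'eth_dst', 'vlan_vid', 'ipv4_src'}))  # False
```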

At the start of the week I, finally, sent a draft of the slow-path benchmarks paper to my supervisors, and spent a couple of hours doing a little bit of tidy-up.

01 Mar 2016

Worked a little bit on the OpenFlow patch manager again. I added support to save and load configuration, and tidied up handling of multiple switches so that only registered dpids are used, rather than a first-in approach. This also means multiple switches can be supported from a single instance, however it does not have any inter-switch smarts.

Watched a couple of the apricot talks; some were quite interesting, particularly some of the lightning talks. Also got a tiny bit sidetracked looking at multi-path TCP, which we now have a capture of.

Continued working on the paper I'm writing, mainly involving reading over and re-writing sections.

I've also been working through re-enrolment for the next year of my PhD, as I will be on holiday from the 14th March until April (when my enrolment ends).