
Richard Sanger's blog

23 Aug 2016

I'm working on the first stages of placing rules based on the dependency graph. As a starting point I'm looking at narrowing down possible placements for individual rules as quickly as possible. To begin with I'm taking all possible placements of every rule, then for each dependency keeping only those locations that satisfy the original dependency.

Any single rule may have a number of dependencies, with each dependency being satisfied by a subset of the possible placements. The placements common to all of a rule's dependencies are then chosen as the best starting points. Currently only single dependencies are being considered; I will extend this to consider the entire tree. I'm also not yet considering the types of traffic that will actually hit a given rule; accounting for this may exclude portions of traffic from a rule, and that excluded portion could be exactly the part that is problematic to place.
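
Roughly, the narrowing step looks like this (a sketch with made-up rule/dependency objects, not my actual classes):

    # Sketch only: the dependency objects here are hypothetical stand-ins.
    def candidate_placements(all_placements, dependencies):
        """Return the placements that satisfy every dependency of a rule.

        all_placements: set of possible placements (e.g. table ids).
        dependencies: iterable of objects, each knowing which placements
                      keep that dependency intact (valid_placements).
        """
        candidates = set(all_placements)
        for dep in dependencies:
            # Keep only placements that also satisfy this dependency
            candidates &= set(dep.valid_placements)
            if not candidates:
                break  # no single placement satisfies every dependency
        return candidates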

I also tidied the code base a bit and added everything back into git. After splitting a base TTP library out from the matching portion of the code and from the executable tools a couple of weeks ago, I had not got around to adding these back to git. I also split up a couple of the tools: the flow collection tool is now standalone and collects every metric it can query via OpenFlow, including groups (which I originally forgot to collect) and other information a switch presents.

17 Aug 2016

Worked on displaying the dependencies between rules visually (as found using the CacheFlow technique), by plotting them with the python networkx/matplotlib libraries. I created a simple algorithm for placing nodes (representing rules) which ensures that dependencies are directed top to bottom; the default layouts tried to put equal space between nodes and resulted in a jumbled mess. I used this to verify the results match in some sample cases. I also thought about and added dependencies between tables: in this case a dependency exists between a rule with a goto and those rules in the next table which that goto overlaps with.
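
The rough idea, sketched with networkx and matplotlib (the rules, edges and depth-based layout below are made up for illustration, not taken from my code):

    import networkx as nx
    import matplotlib.pyplot as plt

    G = nx.DiGraph()
    # parent -> child edges: an edge means the child rule depends on the parent
    G.add_edges_from([("r1", "r2"), ("r1", "r3"), ("r2", "r4"), ("r2", "r5")])

    # Row of each node = longest path from a root, so edges always point downwards
    depth = {}
    for node in nx.topological_sort(G):
        preds = list(G.predecessors(node))
        depth[node] = (max(depth[p] for p in preds) + 1) if preds else 0

    # Spread the nodes on each row evenly across the x axis
    rows = {}
    for node, d in depth.items():
        rows.setdefault(d, []).append(node)
    pos = {}
    for d, nodes in rows.items():
        for i, node in enumerate(sorted(nodes)):
            pos[node] = (i - (len(nodes) - 1) / 2.0, -d)

    nx.draw(G, pos, with_labels=True, node_color="lightblue", arrows=True)
    plt.show()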

I used the visual graph to check the results against examples in the original paper, which I have also written into tests. When run against sample sets of flow rules it also gave an overview of where dependencies exist and how many there are. While these dependencies do exist, in some cases they might be condensible into a single table, whereas others might require table separation (for example double vlan tagging requires at least 2 tables).

I also worked a bit more on validating TTP patterns; the suggestion for this was to release it as a web validator. As such I've added reverse mappings from TTP objects to the original line numbers in the JSON, which can be printed in the logged messages. I've taken these and added them to a static HTML page. Along the way I've also found and fixed a couple of minor bugs.
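
For illustration, mapping a recorded character offset in the raw JSON back to a 1-based line number can be as simple as the helper below (a sketch only; the offsets themselves are assumed to be recorded while loading):

    def offset_to_line(json_text, offset):
        # Count the newlines before the offset to get a 1-based line number
        return json_text.count("\n", 0, offset) + 1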

10 Aug 2016

Further investigation into the packet space library this week. I've found a function which attempts to remove overlapping portions of the header space union and the lazily subtracted portion, possibly emptying the set entirely. This is not perfect, but it detects some common cases. Thinking about this problem more, I believe it cannot be achieved accurately without fully expanding all cases out, which is slow and could exhaust memory. However for my purposes I think I can live with a couple of false dependencies. I might make some minor tweaks to detect more cases, but this depends on how it goes.

I'm also aware of some algorithms for OpenFlow rule minimisation; these should map directly onto simplifying header space and could be used to detect more cases in which the sets become empty. However I doubt the runtime trade-off would be worth it, and I would have to implement the algorithm myself.

I quickly benchmarked the header space wildcard library (which calls into a C library) against a pure python implementation. The results were mixed; with simple operations the pure python approach was faster, however with some more complex operations C became faster. I was considering going pure python as it would allow me to put my conversion to and from OpenFlow match rules into the object. However for now I don't think it is worth investing the time.
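
A comparison like this only needs a few lines of timeit; in the sketch below c_intersect and py_intersect are stand-ins for the real wildcard operations rather than the code I actually ran:

    import timeit

    def benchmark(name, func, a, b, number=10000):
        # Time `number` repeated calls of func(a, b)
        took = timeit.timeit(lambda: func(a, b), number=number)
        print("%s: %.3fs for %d calls" % (name, took, number))

    # benchmark("C-backed intersect", c_intersect, wc_a, wc_b)
    # benchmark("pure python intersect", py_intersect, wc_a, wc_b)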

I've implemented the simple algorithm for building a dependency tree as described in CacheFlow, and I've compared lazy execution against the regular subtraction operation (for the cases I can run without running out of memory); the lazy minus gives identical results.
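
For reference, the core loop as I understand it from the paper looks roughly like this; intersect, subtract and is_empty stand in for the header space operations (subtract being the lazy minus):

    def build_dependencies(rules):
        """rules: list of rules sorted from highest to lowest priority.

        Returns a list of (rule, lower_rule) dependency edges.
        """
        edges = []
        for i, rule in enumerate(rules):
            # Packets that would fall through to lower-priority rules if this
            # rule were removed. match_space() is a hypothetical accessor.
            packets = rule.match_space()
            for lower in rules[i + 1:]:
                overlap = intersect(packets, lower.match_space())
                if not is_empty(overlap):
                    edges.append((rule, lower))
                    # The lower rule captures that portion; remove it before
                    # checking the remaining lower-priority rules.
                    packets = subtract(packets, lower.match_space())
                    if is_empty(packets):
                        break
        return edges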

I fired up the packet-in and packet-out benchmarks again and ran them against our new AT. Brad also updated the HP, and I attempted an encrypted control channel on that again, however got no further with it.

03 Aug 2016

This week I have been looking at the header space analysis code and its suitability for building dependency trees and possibly modelling the pipeline for verification. In header space a packet header is modelled as a series of bits of a fixed width, with each bit carrying a value of 0, 1 or a wildcard (an x, meaning either). A header space is a description of the values a packet header could carry and is made up of a union of fixed-width wildcard expressions (or none in the case it is empty). Header space defines a set of logic operators that can be applied to header spaces, such as subtraction, addition, detecting subsets, inversion etc. Beyond this the research also explores using it to model networks and detect loops etc., something that could be useful in a verification capacity.

As such I have looked through some of the included examples and implemented some simple wrappers to put all the OpenFlow fields into header space, including a friendly way to read these fields back. I've also started initial testing of header space with some real data. I've found that subtraction is not a feasible option due to the expansion it causes: because a header space is represented as a union, applying subtraction essentially requires every other possible value (everything except the one subtracted) to be explicitly listed in the union. The solution discussed by the header space paper, lazy subtraction, is required for my usage. However it does not seem that this works well with detecting when a set has become empty or when terms cancel each other out etc. More investigation is required to see how necessary this is and whether it can be worked around.
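
To make the representation concrete, here is a toy version (not the real library): a wildcard is a string over 0/1/x, a header space is a union of wildcards, and lazy subtraction simply records the subtracted terms instead of expanding the union:

    # Toy illustration of the header space idea, not the real library.
    def intersect_bits(a, b):
        """Bitwise intersection of two wildcard strings, or None if empty."""
        out = []
        for x, y in zip(a, b):
            if x == 'x':
                out.append(y)
            elif y == 'x' or x == y:
                out.append(x)
            else:
                return None  # 0 meets 1: no packet matches both
        return ''.join(out)

    class HeaderSpace(object):
        def __init__(self, union, subtracted=None):
            self.union = list(union)                  # list of wildcard strings
            self.subtracted = list(subtracted or [])  # lazily subtracted wildcards

        def minus(self, wildcard):
            # Lazy subtraction: record the term rather than expanding the
            # union, which is what blows up memory when done eagerly.
            return HeaderSpace(self.union, self.subtracted + [wildcard])

    # e.g. all packets with the first bit set, minus those with the last bit set
    hs = HeaderSpace(['1xxx']).minus('xxx1')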

I also spent some time reading over Dimeji's masters thesis and gave him my feedback one final time before he handed it in at the end of the week.

27 Jul 2016

I've been looking at the best way to split up my library into a base Table Type Pattern parsing/validation portion and the rest of my code matching flows into this pattern.
This gives two advantages: firstly it helps make the code a little more manageable, as it is getting fairly large with a lot of the code in a single file; secondly it would allow a Table Type Pattern validation tool and viewer to be released without releasing the flow matching portion. Tools for dealing with Table Type Patterns are essentially non-existent, so this would be useful to others.

The natural design of the Table Type Pattern code made this split difficult. A Table Type Pattern is a tree of nested objects of differing types, depending on where they sit in the tree. For example a Flow will include a list of Instructions and a list of Matches, and that list of Instructions may include a list of Apply or Write Actions. My code has followed this pattern, with object initialisation creating children of the expected type recursively until the entire TTP is loaded. My flow matching part works in a similar way by calling satisfies, which is then run recursively on the entire structure.

In this case traditional sub-classing does not work, because all references in the parent object types would need to be updated to create the correct sub-class. A mapping could be used to ensure the correct class is created. However, all classes are already subclasses of TTPObject, and some of TTPList; using a mapping would still not allow the base class to be updated, because the base class is fixed when the sub-classes are defined.

As such I decided the best way to solve this problem was to create a decorator that allows classes to be extended by merging a new class into the original, similar to partial classes in C#. The method I used also allows fake subclassing, in that the original version of a method can still be called from the new version while the new one is written to the class. Due to how python creates classes, updating a base class applies to all of its sub-classes.
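
A simplified version of the idea, with a made-up TTP class rather than the real ones:

    def extends(original):
        """Class decorator: merge the decorated class's methods into `original`."""
        def merge(extension):
            for name, value in vars(extension).items():
                if name.startswith('__') and name.endswith('__'):
                    continue  # leave dunder attributes alone
                existing = original.__dict__.get(name)
                if existing is not None:
                    # Keep the original method reachable as orig_<name>,
                    # which gives the "fake subclassing" behaviour.
                    setattr(original, 'orig_' + name, existing)
                setattr(original, name, value)
            return original
        return merge

    class TTPFlow(object):
        def satisfies(self, flow):
            return True

    @extends(TTPFlow)
    class TTPFlowMatching(object):
        def satisfies(self, flow):
            # Call the original implementation, then add the matching logic
            if not self.orig_satisfies(flow):
                return False
            return getattr(flow, 'match', None) is not None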

Once I had figured out this method I made the split, tidying some of the docs and code as I went.

19 Jul 2016

This week I worked on creating and installing a single flow into the of-dpa pipeline automatically. To do this the computed position of a rule needed to be converted back to a ryu flowmod. Because I'm only looking at a single rule and not splitting it up, it currently seems like the original flow could simply be installed with an updated table number. However it is not this simple in of-dpa, as almost all output actions are indirected via a group, and rules require specific priorities to be installed.

As such the first step was updating the satisfies() code to return not only the remaining portion of the match but also the flow rule required to install it. This allows the priority and groups to be passed back. Secondly I added code to convert from the internal format back into ryu flowmods and groupmods, including of-dpa specific logic for group numbering.
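
To give an idea of the final step, here is a rough sketch of emitting a flowmod and an indirect group with ryu's OpenFlow 1.3 parser; the match fields, priority handling and group id below are placeholders rather than the real of-dpa numbering logic:

    def install_rule(datapath, table_id, priority, out_port):
        ofp = datapath.ofproto
        parser = datapath.ofproto_parser

        # Indirect group wrapping the output action, as of-dpa requires
        group_id = out_port  # placeholder: of-dpa encodes type/vlan/port here
        bucket = parser.OFPBucket(actions=[parser.OFPActionOutput(out_port)])
        datapath.send_msg(parser.OFPGroupMod(
            datapath, ofp.OFPGC_ADD, ofp.OFPGT_INDIRECT, group_id, [bucket]))

        # Flow rule pointing at the group instead of outputting directly
        match = parser.OFPMatch(eth_dst='00:00:00:00:00:01')
        inst = [parser.OFPInstructionActions(
            ofp.OFPIT_APPLY_ACTIONS, [parser.OFPActionGroup(group_id)])]
        datapath.send_msg(parser.OFPFlowMod(
            datapath, table_id=table_id, priority=priority,
            match=match, instructions=inst))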

I also read over a draft of Dimeji's masters thesis and have given him my feedback.

11 Jul 2016

Short 2 day week this week before heading down to Wellington for SDNcon.

I added 'all' group support so that we can handle the multicast case in ofdpa. I spent the remaining time writing tests for the satisfy-match part of the TTP code. This included checking for correct processing of the 'meta' elements within lists (which are used throughout TTPs).

For SDNcon Brad, Kit, Michael and I teamed up to implement faucet, a VLAN switch, for the ofdpa pipeline. We were successful in doing so; the code is now up here https://github.com/wandsdn/faucet.

SDNcon was a great opportunity to meet up with other researchers. I had a chance to meet students from Victoria and UNSW. Victoria also has students working on ofdpa with an AS4600, like us, and I was able to get some newer firmware from them - I'm still to test this. It seems that everyone is having similar issues. I look forward to keeping in touch and sharing knowledge with everyone.

05 Jul 2016

Still working towards the goal of generating and installing some rules into the ofdpa pipeline. I've continued with processing groups in the TTP; this is needed in the of-dpa pipeline as groups are required to install output actions. I've also found that in some cases other actions need to be applied in the group rather than in the instructions. One example is pop_vlan, which is done in the group rather than the instruction. As such I now walk into the groups and their buckets and process the actions included. However this is still not catching all cases: I still need to implement multicast and broadcast, which in ofdpa is controlled by a group with group_type="all" whose buckets reference indirect groups that output to a port. This case is interesting because it is a copy operation to each indirect group, where each indirect group can separately choose whether it pops a vlan or not.
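
The walk looks roughly like this; the group/bucket objects are simplified stand-ins for my internal representation, not ryu structures:

    def expand_group(group_id, groups):
        """Return a list of action lists, one list per packet copy."""
        group = groups[group_id]
        if group.type == 'indirect':
            # Single bucket applied to the one packet copy
            return expand_actions(group.buckets[0].actions, groups)
        if group.type == 'all':
            # Multicast/broadcast: the packet is copied once per bucket, and
            # each bucket (usually an indirect group) can pop the vlan or not
            copies = []
            for bucket in group.buckets:
                copies.extend(expand_actions(bucket.actions, groups))
            return copies
        raise NotImplementedError(group.type)

    def expand_actions(actions, groups):
        """Expand group references within an action list into packet copies."""
        copies = [[]]
        for action in actions:
            if action.type == 'group':
                expanded = expand_group(action.group_id, groups)
                copies = [copy + extra for copy in copies for extra in expanded]
            else:
                copies = [copy + [action] for copy in copies]
        return copies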

An updated version of the of-dpa TTP pipeline came out this week; it looks like they've added one or two fields, however there are no code or documentation updates accompanying it. I've spent some time working through fixing up the table type pattern (a script of regex replacements which works with both the new and old versions). I'm now fully parsing all the identifiers and groups that have been defined and have corrected a number of typos. I hope to push these changes back to github soon.
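
The fix-up script is essentially a list of regex replacements applied over the JSON text; the patterns below are illustrative examples rather than the actual fixes:

    import re

    REPLACEMENTS = [
        (r'"IN_PHY_PORT"', '"IN_PORT"'),  # example typo fix, not a real one
        (r',(\s*[}\]])', r'\1'),          # strip trailing commas before } or ]
    ]

    def fix_ttp(text):
        for pattern, replacement in REPLACEMENTS:
            text = re.sub(pattern, replacement, text)
        return text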

29 Jun 2016

Working towards a short term goal of trying to install automatically generated flows in the ofdpa pipeline, before tackling the larger issue of dependencies between rules. This requires me to install entire flows (match + instructions/actions), so I've been working on adding instruction and action matching; previously I had only been considering the match portion.

I have been working through the process of parsing instructions and actions, much like I have already done for the match portions, both for the TTP and for converting ryu flows to an intermediate representation. As part of parsing the actions I've created a simple topological sort which returns a normalised representation of apply actions: the action list is applied to the packet in the order specified, however only some actions are dependent on earlier ones. I've also restructured some common code for parsing meta items in lists into a TTPList type which handles the parsing of meta items and the logic around satisfying the condition imposed.
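
A minimal sketch of the normalisation: only orderings forced by a real dependency are preserved, and independent actions fall into a canonical order. The depends_on predicate here is a stand-in for the real dependency rules:

    def depends_on(later, earlier):
        # Stand-in rules: a set-field depends on a push of the same header,
        # and an output depends on everything before it.
        if later.type == 'output':
            return True
        return later.type == 'set_field' and earlier.type.startswith('push')

    def normalise_apply_actions(actions):
        """Return a canonical ordering that preserves only real dependencies."""
        remaining = list(actions)
        ordered = []
        while remaining:
            # Actions whose (earlier) dependencies have all been placed
            ready = [a for i, a in enumerate(remaining)
                     if not any(depends_on(a, b) for b in remaining[:i])]
            # Pick a canonical representative among the independent actions
            choice = min(ready, key=lambda a: a.type)
            ordered.append(choice)
            remaining.remove(choice)
        return ordered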

For now apply actions are only matched with apply and write with write; however, both will be considered in the future.

The next step is parsing groups, which also include instructions in their buckets!! and are required to output packets in the ofdpa pipeline.

22 Jun 2016

In the past couple of weeks I've been working further on processing Table Type Patterns and matching rules into these. In terms of processing Table Type Patterns, I have extended the fitting of existing flow rules into the pipeline to consider which prior tables a packet is required to traverse in order to get to a table in which a rule will fit. Then, given that path, I check that flow rules matching the required packets can in fact be installed. All of this currently considers only the match component.

I've also been working on the task of splitting a single table of rules into multiple tables. This process involves finding which parts of a flow rule accurately predict its actions, i.e. finding an ideal place where a set of rules can be split across tables. This could be useful in order to fit a restricted pipeline, or to find further optimisations (note that in an unrestricted pipeline lower priority rules can trivially be placed in a later table with a goto linking the tables). One such optimisation I'm working on is reversing the Cartesian product: as seen in switching, any source mac can go to any dst mac, resulting in an n^2 expansion when placed into a single table. As such I try to detect this case once flows have been split and condense these rules down. To get this working well I've found that adding fake rules which output out the in_port helps, as openflow will drop these packets, and I have attempted to normalise cases such as when vlans are present, promoting all untagged packets to include vlans so that more rules overlap.
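
As a simplified illustration of the detection (just the two MAC fields, ignoring priorities and actions): if the rules in a table pair every source MAC with every destination MAC, the n*m rules can be condensed into an n-rule source table and an m-rule destination table linked by a goto.

    def is_cartesian_product(rules):
        """True if the rules match exactly every (eth_src, eth_dst) combination."""
        srcs = {r.match['eth_src'] for r in rules}
        dsts = {r.match['eth_dst'] for r in rules}
        pairs = {(r.match['eth_src'], r.match['eth_dst']) for r in rules}
        # No duplicates, and every src/dst combination is present
        return len(pairs) == len(rules) and pairs == {(s, d) for s in srcs for d in dsts}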

For both the processing of Table Type Patterns and the splitting of flow rules I've been working on some basic unit tests. I've found unit tests useful for detecting bugs as this code starts to grow in size.

Last week Brad found a newer version of ofdpa for the Accton, and with his assistance we have successfully installed it. I've updated my simplest switch to work on the newer pipeline and install some functional rules. So far I've found something of a version mismatch between ofdpa and the indigo agent, which is missing some match fields required to install certain types of rules; however in the case of simplest switch I've worked around this. I've emailed Accton to see if it is possible to get an updated version. Despite this, the version of ofdpa seems very functional and I'm hoping to be able to do most things I want on it.