
Shane Alcock's Blog

04 Aug 2014

Made a few minor tidy-ups to the TCPPing test. The main change was to pad IPv4 SYNs with 20 bytes of TCP NOOP options, so that IPv4 and IPv6 tests to the same target produce packets of the same size. Otherwise things could get confusing for users who choose a packet size on the graph modal and find that they can't see IPv6 (or IPv4) results.
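
The maths works out because an IPv6 header is 20 bytes larger than an IPv4 one. A minimal sketch of the padding in Python, purely for illustration (the actual test is part of AMP's C code, and a real implementation would also recompute the TCP checksum before sending):

```python
# A TCP NOOP option is the single byte 0x01, so appending 20 of them
# grows the TCP header by five 32-bit words.
TCP_NOP = b"\x01"

def pad_syn(tcp_header: bytes) -> bytes:
    padded = tcp_header + TCP_NOP * 20
    # The data offset (header length in 32-bit words) is the top
    # nibble of byte 12; bump it by 5 to account for the options.
    doff = (tcp_header[12] >> 4) + 5
    return padded[:12] + bytes([(doff << 4) | (padded[12] & 0x0F)]) + padded[13:]
```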

Now that we have three AMP tests that measure latency, we decided that it would be best if all of the latency tests could be viewed on the same graph, rather than there being a separate graph for each of DNS, ICMP and TCPPing. This required a fair amount of re-architecting of ampy to support views that span multiple collections -- we now have an 'amp-latency' view that can contain groups from any of the 'amp-dns', 'amp-icmp' and 'amp-tcpping' collections.

Added support for the amp-latency view to the website. The most time-consuming change was redesigning the modal dialog for choosing which test results to add to an amp-latency graph, as it now needs to support all three latency collections (which have quite different test options) on the same dialog. It gets quite complicated when you consider that we won't necessarily run all three tests to every target (there's no point running a DNS test to www.wand.net.nz, for instance, as it isn't a DNS server), so the dialog must present every valid selection and no invalid ones. As a result, a lot of hiding and showing of modal components is required based on whichever option the user has just changed.

Managed to get amp-latency views working on the website for the existing amp-icmp and amp-dns collections, but it should be a straightforward task to add amp-tcpping as well.

28 Jul 2014

Wrote a script to update an existing NNTSC database to add the necessary tables and columns for storing AS path data. Tested it on my existing test database and will roll it out to prophet once we're collecting AS path data and are sure that our database schema covers everything we want to store.

Added a TCP ping test to AMP. This turned out to be a lot more complicated than I had first anticipated, but I'm reasonably confident that we've got something working now. The test works by sending a TCP SYN to a predefined port on the target and measuring how long it takes to get a TCP response (either a SYN ACK or a RST). We can also get an ICMP response, so we need to listen for that too and report a failed result in that case. The complications arise because the operating system normally handles the TCP handshake itself, so we have to pull a number of tricks to be able to send and receive TCP SYN and SYN ACK packets inside our test code.

Sending a SYN is easy enough using a raw socket, although we have to bind the source port using a separate socket to stop the OS from handing that port to other applications, which would interfere with our responses. Getting the response is a lot harder: we have to work out which interface our SYN is going to leave on and attach a pcap live capture to that interface (filtering on traffic for our known source and destination ports, plus ICMP). We find the interface by creating a UDP socket to our target and seeing which source address it binds to, then checking the list of addresses returned by getifaddrs() for a match. The matching entry tells us the name of the interface that the address belongs to.
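
A rough Python equivalent of the interface-discovery trick (the netifaces package stands in for getifaddrs() here, which is my substitution, not what the C test uses):

```python
import socket
import netifaces

def find_outgoing_interface(target):
    # "Connecting" a UDP socket sends no packets; it just makes the
    # kernel choose a route, so getsockname() reveals the source
    # address that a packet to the target would use.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect((target, 53))   # the port is irrelevant, nothing is sent
    source = s.getsockname()[0]
    s.close()

    # Walk the interface list looking for that address, as the C code
    # does with getifaddrs().
    for iface in netifaces.interfaces():
        for entry in netifaces.ifaddresses(iface).get(netifaces.AF_INET, []):
            if entry.get("addr") == source:
                return iface
    return None
```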

Any packets received on our pcap capture are checked to see if they match any of the SYNs that we had sent out. This is done by parsing the packet headers -- I felt dirty writing non-libtrace packet parsing code -- and looking to see if the ACK matched the sequence number of the packet we had sent (in the case of TCP) or if the embedded TCP header matched our original SYN (in the case of ICMP).
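
The matching rules themselves are simple once the headers are parsed. A sketch with illustrative names (a SYN consumes one sequence number, so the reply acknowledges our sequence number plus one):

```python
# 'pending' maps the initial sequence number of each SYN we sent to
# its send time, so a successful match yields the latency directly.

def match_tcp(ack_number, pending):
    # SYN ACK and RST responses acknowledge our sequence number + 1.
    return pending.get(ack_number - 1)

def match_icmp(embedded_seq, pending):
    # ICMP errors quote the offending packet's headers, so the
    # embedded TCP sequence number identifies our original SYN.
    return pending.get(embedded_seq)
```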

The test still has a few annoying limitations due to the nature of firewalls on the Internet these days. I had originally intended to let the test vary the packet size by adding payload to the SYN, which is technically legal TCP behaviour, but in testing I found that SYNs with extra payload are often dropped and we get no response at all. Transparent proxies on the monitor side are also problematic, as they will pre-emptively respond to SYNs on port 80 and therefore corrupt our latency measurement. The Fortigate here at Waikato does this, for example, which initially made me think I had a bug in my timestamping because I was getting sub-1ms results for targets I knew were hundreds of milliseconds away.

Deployed the TCP ping test successfully on our CentOS VM and was able to collect some test data in a NNTSC database. Also updated netevmon to be able to process TCP ping latency measurements.

21 Jul 2014

Spent most of the week tidying up the NNTSC codebase. Replaced the dataparsers with proper OO classes, which removed a lot of repeated code and should make it easier to both write and maintain dataparsers. Also replaced most of the error reporting with exceptions rather than returning error codes everywhere.

Added support for storing AS paths alongside IP paths for amp-traceroute in NNTSC. The new improved traceroute test will eventually be able to report the AS for each detected hop, which is often more interesting and useful than the specific IP address. For instance, event detection can look for changes in the AS path rather than alerting when a new IP address is observed -- which can happen a lot when your path takes you through Google or Amazon EC2.

We can also colour our rainbow graph based on the AS rather than using a new colour for each address, which should hopefully reduce the patchwork-quilt effect while still being useful to look at.

15 Jul 2014

Released libtrace 3.0.20 on Monday.

Got as much of our development environment as possible up and running again after the power outage over the weekend. There are still a few non-critical things needing assistance from Brad that I wasn't able to sort out on Monday, but they can wait until next week when we're both here.

On leave from Tuesday for the rest of the week.

07 Jul 2014

Libtrace 3.0.20 was released today.

This release fixes several bugs that have been reported by users, adds support for LZMA compression to libwandio, and adds an API function for getting the fragment offset of an IP packet.

The bugs fixed in this release are:
* Fixed broken snaplen option for ring: input.
* Fixed trace_get_source_port and trace_get_destination_port returning bogus port numbers when given a fragmented packet.
* Fixed timestamp byte ordering on big endian architectures.
* Removed assert failure if a bad compression level or method is provided when configuring an output trace. A libtrace error is raised instead.
* Fixed broken compiler feature checking in configure script. Compiler features are also detected for compilers other than gcc, e.g. clang.
* Fixed potential segfaults in OSPF libpacketdump parser if the packet is truncated midway through the OSPF header.

The OSPF bug fix unfortunately resulted in the 'len' field in the libtrace_ospf_t structure being renamed to 'ospf_len' -- if you are using libtrace to process OSPF packets, please make sure you update your code accordingly.

The full list of changes in this release can be found in the libtrace ChangeLog.

You can download the new version of libtrace from the libtrace website.

07 Jul 2014

Carrying on from last week: storing a cache entry per stream turned out to be a bad idea. Some matrix meshes consist of hundreds of streams, so we were spending a lot of time looking up cache entries. As a result, I rewrote the caching code to store one dictionary per collection, mapping stream ids to tuples containing the timestamps. The dictionary is looked up once per query, so only one cache operation is required to generate a matrix.

Updating the cache when we have to query for missing values is a bit annoying: we cannot simply update the dictionary and put it back in the cache once the query is complete, because the data-inserting process may have updated other cache entries with new 'most recent data' timestamps while we were fulfilling our query. Instead, we have to re-fetch the dictionary, update the one stream we're changing and then immediately store the dictionary again.
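
Sketched in Python with python-memcached (the key names and structure are illustrative, not the actual ampy/NNTSC code):

```python
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def fetch_timestamps(collection):
    # One entry per collection: stream id -> (first, last) tuple, so
    # an entire matrix needs only a single cache lookup.
    return mc.get("timestamps-" + collection) or {}

def update_timestamps(collection, stream, first, last):
    # Re-fetch immediately before writing: the inserting process may
    # have refreshed other streams' entries while our query ran, and
    # writing back a stale copy would clobber those updates.
    current = mc.get("timestamps-" + collection) or {}
    current[stream] = (first, last)
    mc.set("timestamps-" + collection, current)
```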

Updated ampy to no longer keep track of active streams and removed support for ACTIVE_STREAMS queries from the NNTSC protocol.

Merged Perry's LZMA support into libwandio. Started working towards a new libtrace release -- managed to build and pass all tests on our various development boxes, so we should be able to push out a release next week.

Spent a day reading over Meenakshee's thesis. Suggested a series of mostly minor edits and changes but overall it is looking pretty good.

30 Jun 2014

Found and fixed a large memory leak in netevmon that had caused prophet to run out of memory over the weekend. The problem was that I was allocating space for storing IPv6 address strings in the Traceroute detector but not freeing it properly if the address was already in our LRU. Also took the opportunity to make our memory use for Traceroute much more efficient, i.e. having a global hop LRU shared across all traceroute streams rather than one per stream, which was leading to a lot of duplication.

Started looking into our insertion speed problems. One obvious source of slowdowns is the UPDATE that we use to remember when we last inserted data for a stream. This update is called once per measurement interval for each collection and becomes quite onerous once the streams table gets very large. Implemented a solution where the timestamps of the first and last insertion for each stream are stored in memcache instead of the database. If there is no entry in memcache when a query comes in for a stream, we can query that stream's data table for its minimum and maximum timestamps instead, although this is a slightly expensive operation.
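
A sketch of that fallback, with hypothetical names throughout (the placeholder style assumes a psycopg2-like database driver):

```python
def stream_timestamps(mc, cursor, stream_id, datatable):
    key = "firstlast-%d" % stream_id
    cached = mc.get(key)
    if cached is not None:
        return cached
    # Cache miss: scan the stream's data table for its extremes. This
    # is the slightly expensive operation mentioned above, so the
    # result goes straight back into memcache for next time.
    cursor.execute(
        "SELECT min(timestamp), max(timestamp) FROM %s "
        "WHERE stream_id = %%s" % datatable, (stream_id,))
    first, last = cursor.fetchone()
    mc.set(key, (first, last))
    return (first, last)
```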

Once I had that working, I removed the 'streams' table from NNTSC entirely, as it was no longer needed (each collection has its own stream table with specific details about each stream; the streams table was mainly for storing properties common to all collections, like lasttimestamp). This meant I had to remove or change all references to the streams table in the NNTSC database code, but it was otherwise straightforward.

Spent Friday fixing a bug in libtrace where trace_get_source_port and trace_get_destination_port would return bogus values if called on fragmented packets. Added a new API function for getting the fragment offset and the more-fragments flag from a packet. I needed this for the bug fix anyway, and given the amount of bit-shifting, masking, multiplying and header parsing (for v6) involved, it will probably be useful to other people as well.
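
For the IPv4 case, the arithmetic boils down to a mask and a multiply (a sketch, not the libtrace implementation; the v6 case additionally has to walk the extension-header chain to find the fragment header):

```python
import struct

def ipv4_fragment_info(ip_header: bytes):
    # Bytes 6-7 of an IPv4 header pack three flag bits ahead of a
    # 13-bit fragment offset counted in 8-byte units.
    field = struct.unpack("!H", ip_header[6:8])[0]
    more_fragments = bool(field & 0x2000)  # the MF flag
    offset = (field & 0x1FFF) * 8          # convert to a byte offset
    return offset, more_fragments
```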

23 Jun 2014

Short week last week, after being sick on Monday and recovering from a spot of minor surgery on Thursday and Friday.

Finished adding support for the amp-throughput test to amp-web, so we can now browse and view graphs for amp-throughput data. Once again, some fiddling with the modal code was required to ensure the modal dialog for the new collection supported all the required selection options.

Lines for basic time series graphs (i.e. non-smokeping style graphs) will now highlight and show tooltips if moused over on the detail graph, just like the smoke graphs do. This was an annoying inconsistency that had been ignored because all of the existing amp collections before now used the smoke line style. Also fixed the hit detection code for the highlighting so that it would work if a vertical line segment was moused over -- previously we were only matching on the horizontal segments.

17 Jun 2014

Reworked how aggregation binsizes are calculated for the graphs. There is now a fixed set of aggregation levels to choose from, based on the time period being shown on the graph. This means we should hit cached data a lot more often, rather than choosing a new binsize every few zoom levels. Increased the minimum binsize to 300 seconds for all non-amp graphs and 60 seconds for amp graphs. This will help avoid cases where the binsize was smaller than the measurement interval, resulting in empty bins that we had to recognise were not genuine gaps in the data.
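
Conceptually the selection looks something like this; the ladder values and the bin-count target are illustrative, not the actual amp-web configuration:

```python
# Fixed ladder of aggregation levels, in seconds.
BINSIZES = [60, 300, 900, 3600, 14400, 86400]

def choose_binsize(start, end, minimum, maxbins=300):
    # Pick the smallest allowed binsize that keeps the number of bins
    # manageable for the period being displayed; repeated queries for
    # similar periods then land on the same bins and hit the cache.
    duration = end - start
    for binsize in BINSIZES:
        if binsize >= minimum and duration // binsize <= maxbins:
            return binsize
    return BINSIZES[-1]

# minimum would be 60 for amp graphs and 300 for everything else.
```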

Added new matrices for DNS data, one showing relative latency and the other showing absolute latency. These act much like the existing latency matrices, except we have to be a lot smarter about which streams we use for colouring the matrix cell. If there are any non-recursive tests, we will use the streams for those tests as these are presumably cases where we are querying an authoritative server. Otherwise, we assume we are testing a public DNS server and use the results from querying for 'google.com', as this is a name that is most likely to be cached. This will require us to always schedule a 'google.com' test for any non-authoritative servers that we test, but that's probably not a bad idea anyway.
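
The selection rule reads roughly like this (a sketch assuming each stream carries 'recurse' and 'query' fields, which is my simplification rather than the real ampy layout):

```python
def matrix_streams(streams):
    nonrecursive = [s for s in streams if not s["recurse"]]
    if nonrecursive:
        # Non-recursive tests imply we are querying an authoritative
        # server, so those results reflect the server itself.
        return nonrecursive
    # Otherwise it is a public resolver: use the query whose answer
    # is most likely to already be in its cache.
    return [s for s in streams if s["query"] == "google.com"]
```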

Wrote a script to more easily update the amp-meta databases to add new targets and update mesh memberships. Used this script to completely replace the meshes on prophet to better reflect the test schedules that we are running on hosts that report to prophet.

Merged the new ampy/amp-web into the develop branch, so hopefully Brad and I will be able to push out these changes to the main website soon.

Started working on adding support for the throughput test to ampy. Hopefully all the changes I have made over the past few weeks will make this a lot easier.

09 Jun 2014

Finally went ahead with the database migration on skeptic. Upgrading NNTSC, ampy and amp-web to the latest development versions went relatively smoothly, and the graphs on amp.wand.net.nz are now much quicker to load. Insertion speed is now our primary concern, as our attempts to speed up inserts by committing less often did not produce great returns. Firstly, we run into lock limits very quickly, as each table we insert into requires a separate lock until we commit the transaction. Secondly, the commits take a significant amount of time during which we are not processing new messages; we will need to look into splitting message processing and database operations into separate threads.

Finished deploying the new ampy and amp-web on prophet. The modal dialog code needed to be quite heavily modified to support the new way that ampy handles getting selection options. Also found and fixed a number of old bugs relating to how we fetch and organise time series data, including the big one where a time series would appear to jump backwards. Will do a bit more testing but soon we'll be ready to deploy this on skeptic as well.