Stephen Eichler's blog

07 Dec 2015

Tony McGregor has critiqued the background chapter of my thesis, and I have been making changes.

I reread the entire thesis and made a few corrections.

23 Nov 2015

Had a meeting with Tony McGregor about the introduction. Took away some critique sheets and used these to make changes and improvements.

Continued to read through the rest of the chapters so that I can pass on a draft and get some critiques from Matthew and Richard, in particular on the later chapters.

17 Nov 2015

Gave a talk at the internal PhD student conference on my thesis chapters.

I have been checking through my entire thesis for logical flow and grammatical errors, including the use of 'which' and 'that', and comma placement. After I finish, it will be forwarded to two more people to critique.

15 Oct 2015

The discussion and conclusions section of my thesis was updated to include observed changes in load balancer prevalence. Various other changes were made, including moving some discussion into the individual chapters rather than the final discussion chapter.

A draft of my PhD conference talk was put together. The talk will introduce my thesis, including all of its chapters.

05 Oct 2015

Improvements to the introduction and conclusions of my thesis have been carried out. Details of the contributions were added, and more discussion of the simulation outcomes was added to tie the story together.

I commented that Megatree does not require repeated destinations to the same extent that Doubletree does.

28 Sep 2015

Started reworking the introduction and conclusions of my thesis, now that the rest of it has had a once-over.

Tried to find a topic for the internal PhD student conference. I might address the stopping values that we generated, along with observations on the data collected using these stopping values and a high-confidence setting.

21 Sep 2015

Updates were made to my thesis draft chapters: background and related work. This was based on feedback that I have received from my chief supervisor.

I found some more papers that I should refer to in the related work and discussion; these were found by searching for Doubletree and cost analysis. In that work the authors describe upgrading Max-Delta and building in Doubletree: all traceroute topology information is shared between vantage points, steps are taken to reduce the size of the topology data set, and decisions are made on the fly as to which destinations are the best choice for improving topology discovery coverage of the Internet from each vantage point.
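
The general idea, as I understand it, can be sketched as a greedy choice over the coverage each candidate trace is expected to add. The sketch below is only my illustration of that idea; the function names and the scoring heuristic are assumptions, not the authors' code.

```python
# Hypothetical sketch of on-the-fly destination selection for coverage.
# The names and the scoring heuristic are my own illustration of the
# general idea, not the algorithm from the paper.

def expected_new_interfaces(dest, shared_topology, predicted_paths):
    """Estimate how many interfaces a trace to `dest` would add.

    `predicted_paths` maps a destination to the set of interfaces the
    trace is expected to cross (e.g. from earlier rounds or from another
    vantage point); `shared_topology` is everything already discovered.
    """
    return len(predicted_paths.get(dest, set()) - shared_topology)

def pick_next_destination(candidates, shared_topology, predicted_paths):
    """Greedily pick the destination expected to add the most coverage."""
    return max(candidates,
               key=lambda d: expected_new_interfaces(d, shared_topology,
                                                     predicted_paths))

# After each trace, the newly discovered interfaces would be merged into
# shared_topology (shared between vantage points) before the next pick.
```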

14 Sep 2015

The differences between the two algorithms that I used to generate stopping values were further examined. The Veitch algorithm I mentioned last week uses a universal bound on failure probability, which seems to justify the higher conditional failure probabilities it allows for lower numbers of successors. This makes its stopping values smaller than those I obtained using a Monte Carlo algorithm that assumes a maximum of thirty nodes in a path, with conditional failure probabilities set according to that assumption.
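
As a rough illustration of the Monte Carlo side of this, the sketch below estimates a stopping value as the smallest number of probes for which the simulated chance of missing one of k equally likely successors falls below a per-node alpha, with that alpha obtained by splitting the overall confidence equally over an assumed maximum of thirty nodes. This is only a minimal sketch under those assumptions, not the algorithm I actually used.

```python
import random

def monte_carlo_stopping_value(k, per_node_alpha, trials=50_000, n_max=200):
    """Smallest probe count n for which the simulated probability of NOT
    seeing all k equally likely successors after n probes is at most
    per_node_alpha. Illustrative only."""
    for n in range(k, n_max + 1):
        failures = sum(
            1 for _ in range(trials)
            if len({random.randrange(k) for _ in range(n)}) < k
        )
        if failures / trials <= per_node_alpha:
            return n
    return None

# Per-node alpha from an overall confidence, assuming at most thirty
# nodes in a path (a simple equal split, used here only for illustration).
confidence = 0.95
per_node_alpha = (1 - confidence) / 30

for k in range(2, 6):
    print(k, monte_carlo_stopping_value(k, per_node_alpha))
```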

The chapters on Megatree and Efficient discovery of load balancer successors were updated based on feedback.

07 Sep 2015

The differences between the two algorithms that I used to generate stopping values were examined. The most obvious difference was the alpha values used. These are derived from the confidence level and generate a bound, usually related to the maximum likely number of nodes to be examined by traceroute MDA. Alpha is the probability of a type I error for the analysis of one node in a path, that is, the probability of failing to find a successor node. When I substituted the alpha values from my Monte Carlo algorithm into the published theoretical Veitch algorithm, they gave almost identical results for predicting stopping values. The difference was that the Veitch algorithm used a power series for alpha based on confidence as the number of stopping values varied, whereas ours used a single result based on the assumption of a maximum of thirty nodes. It seems that the Veitch method may give a lower than expected overall confidence for the path where the number of successors is mostly low.
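
To make the alpha substitution concrete: one commonly quoted closed form for MDA-style stopping points uses a union bound, taking the stopping value for k hypothesised successors as the smallest n with k((k-1)/k)^n <= alpha. I am not claiming this is exactly the published Veitch formula or my own; plugging different per-node alphas into a form like this simply shows why a smaller alpha gives larger stopping values.

```python
import math

def stopping_value(k, alpha):
    """Smallest n with k * ((k - 1) / k) ** n <= alpha: a union bound on
    the probability of having missed one of k equally likely successors."""
    return math.ceil(math.log(alpha / k) / math.log((k - 1) / k))

# Two illustrative per-node alphas: one from splitting the confidence
# over an assumed thirty-node path, one taken per hypothesis directly.
# These are assumptions for the comparison, not the thesis's exact values.
fixed_split_alpha = (1 - 0.95) / 30   # smaller alpha per node
per_hypothesis_alpha = 1 - 0.95       # larger alpha per node

for k in range(2, 7):
    print(k, stopping_value(k, fixed_split_alpha),
          stopping_value(k, per_hypothesis_alpha))
```

With alpha = 0.05 per hypothesis this form reproduces the familiar 6, 11, 16, 21, 27 stopping points, while the thirty-way split pushes every value up, which matches the comparison above.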

The reason for using two algorithms was that ours was able to make twice as many predictions of stopping values with the available computing power.

31 Aug 2015

Carried out corrections on the Doubletree chapters of my thesis draft. These chapters study the IS0 simulator and the BISD simulator. IS0 was created by Tony McGregor and was adapted to source windows in this research. BISD (Basic Internet Simulator of Doubletree) was created as part of this research and examines the same problem, except that it was used with data containing repeated destinations, which was not the case for the Team data downloaded from CAIDA. This meant that the usefulness of both stop sets could be determined together. BISD is based on the trace-by-trace warts analysis software used with scamper.
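
For context on the two stop sets, here is a much-simplified sketch of Doubletree's stopping rules as described by Donnet et al.: forward probing from a midpoint stops on a hit in the global stop set of (interface, destination) pairs shared between monitors, and backward probing stops on a hit in the monitor's local stop set of interfaces. The probe() helper and the data layout below are hypothetical placeholders, nothing like the detail handled by IS0 or BISD.

```python
# Much-simplified sketch of Doubletree's two stopping rules. probe() is
# a hypothetical stand-in returning the interface that answers at a given
# TTL for a destination; IS0 and BISD handle far more detail than this.

def doubletree_trace(dest, start_ttl, probe, local_stop, global_stop,
                     max_ttl=30):
    discovered = []
    # Forward probing: stop on the global stop set of
    # (interface, destination) pairs, or on reaching the destination.
    for ttl in range(start_ttl, max_ttl + 1):
        iface = probe(dest, ttl)
        discovered.append(iface)
        local_stop.add(iface)
        if (iface, dest) in global_stop or iface == dest:
            break
        global_stop.add((iface, dest))
    # Backward probing: stop on the local stop set of interfaces this
    # monitor has already seen.
    for ttl in range(start_ttl - 1, 0, -1):
        iface = probe(dest, ttl)
        if iface in local_stop:
            break
        discovered.append(iface)
        local_stop.add(iface)
    return discovered
```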