My pre-1.0 wishlist
hmurray at megapathdsl.net
Mon Jun 6 02:02:18 UTC 2016
esr at thyrsus.com said:
> But I can't get from your summary description to code. So I need a white
> paper from you on applying this technique that turns that theory into
> actionable advice. How do we test? What do we test? What are our
> success-failure criteria?
We don't need a yes/no answer. There is a 3rd state: needs-manual-review.
The idea is to be able to focus the manual review on the interesting cases
rather than reviewing everything. To start with, we will have to review
everything. When we get familiar with things, we can automate skipping the
not-interesting cases. (I think.)
I think I can turn the general idea into at least the start of an outline for
such a white paper.
Assume the great test farm in the sky. Machines A, B, and C have good
refclocks and are set up to watch all the systems we want to test. (watch ==
server foo noselect)
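On each monitor machine, that "watch" arrangement might look like the
following ntp.conf sketch. The directive names (server, statsdir,
statistics, filegen) are standard ntpd configuration; the target name and
paths are placeholders:

```
# Watch target X without ever selecting it as a sync source.
server x.example.com noselect

# Log the raw packet timestamps we will scan later.
statsdir /var/log/ntpstats/
statistics rawstats
filegen rawstats file rawstats type day enable
```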
Ignoring the startup transients (details tbd), scan the rawstats on A, B, and
C, selecting the lines that refer to machine X. For each line, compute the
round trip time and offset.
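The round-trip and offset arithmetic is the standard NTP on-wire
calculation from the four timestamps on each rawstats line. A minimal
sketch in Python, assuming the usual rawstats field layout (MJD, seconds,
source, destination, then T1..T4; check your ntpd version's documentation):

```python
# Hedged sketch: compute round-trip delay and offset from one rawstats
# line. Field positions are an assumption about the rawstats layout.
def delay_and_offset(line):
    fields = line.split()
    # Assumed layout: MJD, seconds, source addr, dest addr, T1 T2 T3 T4
    t1, t2, t3, t4 = (float(f) for f in fields[4:8])
    # Standard NTP on-wire formulas (RFC 5905):
    delay = (t4 - t1) - (t3 - t2)
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    return delay, offset

line = "57544 7338.123 203.0.113.5 192.0.2.7 100.000 100.010 100.011 100.025"
d, o = delay_and_offset(line)
print(d, o)  # delay ~0.024 s, offset ~-0.002 s
```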
Make a histogram of the round trip times, using coarse bucket sizes.
There should be none in the first bucket.
Most should be in the second bucket.
A few in the third bucket are OK.
There should be none in the fourth bucket.
There shouldn't be any dropped packets.
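The bucket rules above can be sketched as a small checker. The four bucket
edges and the "a few" threshold are hypothetical hand-tuned values, and the
third return state matches the needs-manual-review idea from the top of
this message:

```python
# Hedged sketch: sort round-trip times into four hand-chosen buckets
# and apply the acceptance rules from the text.
def classify_rtts(rtts, edges):
    # edges = (e0, e1, e2, e3); bucket 0 is below e0, bucket 3 at/above e2.
    buckets = [0, 0, 0, 0]
    for rtt in rtts:
        if rtt < edges[0]:
            buckets[0] += 1
        elif rtt < edges[1]:
            buckets[1] += 1
        elif rtt < edges[2]:
            buckets[2] += 1
        else:
            buckets[3] += 1
    return buckets

def verdict(buckets, few=5):
    # Pass when buckets 1 and 4 are empty and bucket 3 is small;
    # anything else gets punted to a human.
    if buckets[0] == 0 and buckets[3] == 0 and buckets[2] <= few:
        return "pass"
    return "needs-manual-review"

edges = (0.010, 0.020, 0.040, 0.080)   # seconds, hypothetical per-target values
print(verdict(classify_rtts([0.015, 0.012, 0.018, 0.033], edges)))  # pass
```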
Make a similar histogram for the offset with similar bucket logic.
Offsets should be symmetric around 0 so the details are slightly different.
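One way to handle the "slightly different" details: bucket offsets by
magnitude so the bins are symmetric around zero, and separately check that
positive and negative offsets occur in roughly equal numbers. The edges and
the 20% imbalance tolerance below are hypothetical hand-tuned values:

```python
# Hedged sketch of the offset version of the histogram check.
def classify_offsets(offsets, edges=(0.001, 0.005, 0.020)):
    # Bucket by absolute value so the bins are symmetric around zero.
    buckets = [0, 0, 0, 0]
    for off in offsets:
        mag = abs(off)
        if mag < edges[0]:
            buckets[0] += 1
        elif mag < edges[1]:
            buckets[1] += 1
        elif mag < edges[2]:
            buckets[2] += 1
        else:
            buckets[3] += 1
    return buckets

def looks_symmetric(offsets, tolerance=0.2):
    # Offsets should straddle zero; flag a lopsided sign distribution.
    pos = sum(1 for o in offsets if o > 0)
    neg = sum(1 for o in offsets if o < 0)
    if pos + neg == 0:
        return True
    return abs(pos - neg) / (pos + neg) <= tolerance

sample = [0.0005, -0.002, 0.0015, -0.03]
print(classify_offsets(sample))   # [1, 2, 0, 1]
print(looks_symmetric(sample))    # True
```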
The bucket slots are set up by hand for each target machine, probably by
looking at a histogram. We may have to recalibrate things if the network
changes.
We can probably apply similar heuristics to the target machine's loopstats
and peerstats.
These are my opinions. I hate spam.