My pre-1.0 wishlist

Eric S. Raymond esr at
Sun Jun 5 21:45:10 UTC 2016

Daniel Franke <dfoxfranke at>:
> *What* research-grade problem? Dave Mills already solved the
> research-grade part of the problem decades ago. The statistics we
> should be monitoring are already collected
> by ntpd and exported in machine-readable form through ntpq. Sample
> these statistics from version A and version B. From there it's a matter
> of figuring out whether they line up -- and Kolmogorov showed us how
> to do that part close to a *century* ago. Anyway, the fine points of
> our statistical methodology are seldom going to matter: I think bugs
> like "we degraded our precision by 20%" are going to be pretty rare
> compared to "this configuration used to work, and now it's completely
> broken".
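(The comparison Daniel sketches — sample the same statistic from version A and version B, then check whether the two samples line up — is a two-sample Kolmogorov–Smirnov test. The following is a minimal pure-stdlib Python sketch of that idea; the Gaussian samples stand in for whatever statistic ntpq exports, and the function names and the 0.05 significance level are illustrative choices, not anything from the thread.)

```python
import math
import random

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        x = min(a[i], b[j])
        # Advance each index past every observation <= x, so i/n and
        # j/m are the empirical CDF values of a and b at x.
        while i < n and a[i] <= x:
            i += 1
        while j < m and b[j] <= x:
            j += 1
        d = max(d, abs(i / n - j / m))
    return d

def samples_differ(a, b):
    """True if the KS statistic exceeds the large-sample critical
    value at significance 0.05 (coefficient c(0.05) ~= 1.358),
    i.e. the two versions' statistics do NOT line up."""
    n, m = len(a), len(b)
    return ks_two_sample(a, b) > 1.358 * math.sqrt((n + m) / (n * m))

# Illustrative stand-in data: pretend these are, say, offset samples
# gathered from ntpq under version A and version B.
random.seed(42)
version_a = [random.gauss(0, 1) for _ in range(500)]
version_b = [random.gauss(0, 1) for _ in range(500)]   # same behavior
version_c = [random.gauss(1, 1) for _ in range(500)]   # degraded

print(samples_differ(version_a, version_b))  # expected: no difference
print(samples_differ(version_a, version_c))  # expected: difference
```

The appeal of the test for this purpose is that it is distribution-free: it needs no model of what the statistic's distribution ought to look like, only the two sampled sets themselves.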

OK, I thought you were talking about developing some new figure of
merit from first principles.  If ntpd is already generating the stats
we need and their utility is generally accepted by the time-service
community, that makes both the technical and customer-relations ends
of the problem easier.

But I can't get from your summary description to code.  So I need a
white paper from you on applying this technique that turns that theory
into actionable advice.  How do we test?  What do we test?  What are
our success-failure criteria?
		<a href="">Eric S. Raymond</a>
